Accessing a File (Linux Kernel)
Published: 2019-06-28



Accessing Files

Different Ways to Access a File

- Canonical Mode (O_SYNC and O_DIRECT cleared)

- Synchronous Mode (O_SYNC flag set)

- Memory Mapping Mode

- Direct I/O Mode (O_DIRECT flag set, user space <-> disk)

- Asynchronous Mode
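As a rough illustration, the first, second, and fourth modes are selected purely by the flags passed to open(2); the following user-space sketch assumes a hypothetical file named data.bin:

/* Sketch: the open(2) flags that select the access mode
 * ("data.bin" is a hypothetical file name). */
#define _GNU_SOURCE                     /* for O_DIRECT */
#include <fcntl.h>

int main(void)
{
        int fd_canonical = open("data.bin", O_RDWR);             /* canonical mode   */
        int fd_sync      = open("data.bin", O_RDWR | O_SYNC);    /* synchronous mode */
        int fd_direct    = open("data.bin", O_RDWR | O_DIRECT);  /* direct I/O mode  */

        /* memory mapping mode is entered later via mmap(2); asynchronous mode
         * uses the aio_*()/io_*() interfaces discussed at the end of this post */
        (void)fd_canonical; (void)fd_sync; (void)fd_direct;
        return 0;
}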

 

Reading a file is always page-based: the kernel transfers whole pages of data at a time.

Allocate a new page frame -> fill the page with the relevant portion of the file -> add the page to the page cache -> copy the requested bytes into the process address space
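A small user-space demonstration of this page granularity (a sketch; data.bin is a hypothetical file at least one page long): reading a single byte is enough to pull the whole surrounding page into the page cache, which mincore(2) can observe through a mapping.

/* Read one byte, then check whether the whole first page became resident. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        long psz = sysconf(_SC_PAGESIZE);
        char byte;
        unsigned char vec;

        int fd = open("data.bin", O_RDONLY);
        if (fd < 0 || read(fd, &byte, 1) != 1)          /* read a single byte */
                return 1;

        void *map = mmap(NULL, psz, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED)
                return 1;

        mincore(map, psz, &vec);                        /* is the first page resident? */
        printf("first page in the page cache: %s\n", (vec & 1) ? "yes" : "no");
        return 0;
}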

 

Writing to a file may involve disk space allocation because the file size may increase.

 

Reading from a File

/**
 * do_generic_file_read - generic file read routine
 * @filp:       the file to read
 * @ppos:       current file position
 * @desc:       read_descriptor
 * @actor:      read method
 *
 * This is a generic file read routine, and uses the
 * mapping->a_ops->readpage() function for the actual low-level stuff.
 *
 * This is really ugly. But the goto's actually try to clarify some
 * of the logic when it comes to error handling etc.
 */
static void do_generic_file_read(struct file *filp, loff_t *ppos,
                read_descriptor_t *desc, read_actor_t actor)

 

 

Read-Ahead of Files

Many disk accesses are sequential; that is, many adjacent sectors on disk are likely to be fetched when handling a series of read requests issued by a process on the same file.

Read-ahead consists of reading several adjacent pages of data of a regular file or block device file before they are actually requested. In most cases, this greatly improves the system performance, because it lets the disk controller handle fewer commands. In some cases, the kernel reduces or stops read-ahead when some random accesses to a file are performed.

 

Natural-language description -> design (data structures + algorithm) -> code

Description:

- Read-ahead may be gradually increased as long as the process keeps accessing the file sequentially.

- Read-ahead must be scaled down or even disabled when the current access is not sequential (random access).

- Read-ahead should be stopped when the process keeps accessing the same page over and over again, or when almost all the pages of the file are already in the page cache (a toy model of these rules is sketched below).
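The following is only a toy model of these rules, not kernel code; the struct and function names are invented for illustration, and the real heuristics in mm/readahead.c are considerably more involved:

#include <stdio.h>

/* Toy model of read-ahead window scaling (hypothetical names, not kernel code). */
struct toy_ra {
        unsigned long prev_index;   /* index of the last page requested          */
        unsigned int  size;         /* current read-ahead window size, in pages  */
        unsigned int  max;          /* upper bound, cf. file_ra_state.ra_pages   */
};

static unsigned int toy_ra_update(struct toy_ra *ra, unsigned long index)
{
        if (index == ra->prev_index + 1) {
                /* sequential access: grow the window, bounded by the maximum */
                ra->size = ra->size ? ra->size * 2 : 4;
                if (ra->size > ra->max)
                        ra->size = ra->max;
        } else if (index != ra->prev_index) {
                /* non-sequential access: scale the window down to nothing */
                ra->size = 0;
        }
        /* index == prev_index (same page over and over): leave the window as-is;
         * the real code also stops read-ahead when the file is mostly cached */
        ra->prev_index = index;
        return ra->size;            /* number of pages to read ahead */
}

int main(void)
{
        struct toy_ra ra = { .prev_index = 0, .size = 0, .max = 32 };
        for (unsigned long i = 1; i <= 6; i++)          /* sequential run */
                printf("page %lu -> readahead %u pages\n", i, toy_ra_update(&ra, i));
        printf("page 100 -> readahead %u pages\n", toy_ra_update(&ra, 100));  /* random jump */
        return 0;
}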

 

 

 

Design:

Current window: a contiguous portion of the file consisting of pages being requested by the process

 

Ahead window: a contiguous portion of the file immediately following the pages in the current window

 

/*
 * Track a single file's readahead state
 */
struct file_ra_state {
        pgoff_t start;                  /* where readahead started */
        unsigned int size;              /* # of readahead pages */
        unsigned int async_size;        /* do asynchronous readahead when
                                           there are only # of pages ahead */

        unsigned int ra_pages;          /* Maximum readahead window */
        unsigned int mmap_miss;         /* Cache miss stat for mmap accesses */
        loff_t prev_pos;                /* Cache last read() position */
};

 

 

struct file {
        ...
        struct file_ra_state    f_ra;
        ...
};

 

When is the read-ahead algorithm executed?

1. When reading pages of file data

2. When allocating a page for a file memory mapping

3. When the process invokes readahead(), posix_fadvise(), or madvise() (see the sketch below)
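The third case can be triggered explicitly from user space; a minimal sketch (data.bin and the 1 MiB length are arbitrary example values):

/* Explicitly asking the kernel to read ahead. */
#define _GNU_SOURCE                 /* for readahead() */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
        int fd = open("data.bin", O_RDONLY);
        if (fd < 0)
                return 1;

        /* hint that the file will be read sequentially, so read-ahead can be more aggressive */
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

        /* populate the page cache with the first 1 MiB right now */
        readahead(fd, 0, 1 << 20);

        close(fd);
        return 0;
}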

 

Writing to a File

Deferred write: write() stores the data in the page cache and marks the pages dirty; the dirty pages are flushed to disk at a later time.
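A minimal user-space sketch of the difference (out.bin is a hypothetical file name): write(2) only copies the data into the page cache, while fsync(2) blocks until the dirty pages have actually reached the disk.

/* Deferred write: write(2) returns once the data is in the page cache. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char msg[] = "hello, page cache\n";
        int fd = open("out.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
                return 1;

        write(fd, msg, strlen(msg));   /* data copied into the page cache, pages marked dirty */
        fsync(fd);                     /* force the dirty pages out to disk                    */
        close(fd);
        return 0;
}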

 

Memory Mapping

- Shared Memory Mapping

- Private Memory Mapping

 

System calls: mmap(), munmap(), msync()

mmap, munmap - map or unmap files or devices into memory

msync - synchronize a file with a memory map
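A minimal sketch of these calls working together (data.bin is a hypothetical file assumed to be at least one page long):

/* Map a file, modify it in place, push the change back with msync(2). */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        long psz = sysconf(_SC_PAGESIZE);
        int fd = open("data.bin", O_RDWR);
        if (fd < 0)
                return 1;

        /* shared mapping: stores go into the page cache and then to the file */
        char *p = mmap(NULL, psz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
                return 1;

        p[0] = 'X';                      /* first access faults the page in (demand paging) */
        msync(p, psz, MS_SYNC);          /* write the dirty page back to the file           */
        munmap(p, psz);
        close(fd);
        return 0;
}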

 

The kernel offers several hooks so that each filesystem can customize the memory mapping mechanism. The core of the memory mapping implementation is delegated to the file object's method named mmap. For disk-based filesystems and for block device files, this method is implemented by a generic function called generic_file_mmap().

 

 

The memory mapping mechanism depends on demand paging.

For reasons of efficiency, page frames are not assigned to a memory mapping right after it has been created, but at the last possible moment, that is, when the process tries to address one of its pages, thus causing a Page Fault exception.

 

Non-Linear Memory Mapping

The remap_file_pages() system call is used to create a non-linear mapping, that is, a mapping in which the pages of the file are mapped into a non-sequential order in memory. The advantage of using remap_file_pages() over using repeated calls to mmap(2) is that the former approach does not require the kernel to create additional VMA (Virtual Memory Area) data structures.

 

To create a non-linear mapping we perform the following steps:

1. Use mmap(2) to create a mapping (which is initially linear). This mapping must be created with the MAP_SHARED flag.

2. Use one or more calls to remap_file_pages() to rearrange the correspondence between the pages of the mapping and the pages of the file. It is possible to map the same page of a file into multiple locations within the mapped region (see the sketch below).
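A minimal sketch of those two steps (data.bin is a hypothetical file assumed to span at least three pages; note that remap_file_pages(2) has been deprecated since Linux 3.16 and is only emulated by newer kernels):

/* Build a non-linear mapping of a 3-page file. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        long psz = sysconf(_SC_PAGESIZE);
        int fd = open("data.bin", O_RDWR);
        if (fd < 0)
                return 1;

        /* 1. linear MAP_SHARED mapping of the first three file pages */
        char *p = mmap(NULL, 3 * psz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
                return 1;

        /* 2. make the first page of the mapping show file page 2 instead of page 0
         *    (prot must be 0; the offset is given in pages, not bytes)             */
        remap_file_pages(p, psz, 0, 2, 0);

        munmap(p, 3 * psz);
        close(fd);
        return 0;
}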

 

 

Direct I/O Transfer

There’s no substantial difference between:

1. Accessing a regular file through the filesystem

2. Accessing it by referencing its blocks on the underlying block device file

3. Establishing a file memory mapping

 

However, some highly sophisticated programs (self-caching applications such as high-performance servers) want full control over the I/O data transfer mechanism.

 

Linux offers a simple way to bypass the page cache: direct I/O transfer, enabled by opening the file with the O_DIRECT flag.

 

generic_file_direct_IO() -> __blockdev_direct_IO(); the call does not return until all direct I/O data transfers have completed.
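From user space the whole path is reached simply by opening the file with O_DIRECT; a minimal sketch (data.bin and the 4096-byte size are assumptions; O_DIRECT requires the buffer address, file offset, and transfer length to be aligned, typically to the device's logical block size):

/* Bypass the page cache with O_DIRECT and an aligned buffer. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        void *buf;
        if (posix_memalign(&buf, 4096, 4096))      /* aligned buffer */
                return 1;

        int fd = open("data.bin", O_RDONLY | O_DIRECT);
        if (fd < 0)
                return 1;

        /* data goes straight from the disk into buf, not through the page cache */
        ssize_t n = read(fd, buf, 4096);
        (void)n;

        close(fd);
        free(buf);
        return 0;
}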

 

 

Asynchronous I/O

“Asynchronous” essentially means that when a User Mode process invokes a library function to read or write a file, the function returns as soon as the read or write operation has been enqueued, possibly even before the real I/O data transfer takes place. The calling process can thus continue its execution while the data is being transferred.

 

aio_read(3), aio_cancel(3), aio_error(3), aio_fsync(3), aio_return(3), aio_suspend(3), aio_write(3)
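A minimal sketch using this interface (data.bin is a hypothetical file; on older glibc the program must be linked with -lrt):

/* Enqueue an asynchronous read with aio_read(3) and wait for it. */
#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        static char buf[4096];
        int fd = open("data.bin", O_RDONLY);
        if (fd < 0)
                return 1;

        struct aiocb cb;
        memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof(buf);
        cb.aio_offset = 0;

        aio_read(&cb);                      /* returns as soon as the request is enqueued */

        /* ... do other work here while the transfer is in flight ... */

        const struct aiocb *const list[] = { &cb };
        aio_suspend(list, 1, NULL);         /* block until the request completes */
        printf("read %zd bytes\n", aio_return(&cb));

        close(fd);
        return 0;
}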

 

Asynchronous I/O Implementation

ð  User-level Implementation

ð  Kernel-level Implementation

 

User-level Implementation:

Clone the current process -> the child process issues the synchronous I/O requests -> the aio_*() call returns immediately in the parent process

 

Kernel-level Implementation:

io_setup(2), io_cancel(2), io_destroy(2), io_getevents(2), io_submit(2)
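These system calls are usually reached through the libaio wrapper library rather than invoked directly; a minimal sketch, assuming libaio is installed and the program is linked with -laio (data.bin is hypothetical):

/* One asynchronous read through the kernel AIO system calls via libaio. */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        static char buf[4096];
        io_context_t ctx = 0;

        if (io_setup(8, &ctx) < 0)                     /* create an AIO context (io_setup(2)) */
                return 1;

        int fd = open("data.bin", O_RDONLY);
        if (fd < 0)
                return 1;

        struct iocb cb;
        struct iocb *cbs[1] = { &cb };
        io_prep_pread(&cb, fd, buf, sizeof(buf), 0);   /* describe the read request */

        io_submit(ctx, 1, cbs);                        /* enqueue it (io_submit(2)) */

        struct io_event ev;
        io_getevents(ctx, 1, 1, &ev, NULL);            /* wait for completion (io_getevents(2)) */
        printf("read %ld bytes\n", (long)ev.res);

        io_destroy(ctx);                               /* io_destroy(2) */
        close(fd);
        return 0;
}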

 

Reposted from: https://my.oschina.net/u/158589/blog/68118
