|
This patch modifies the code to call set_buffer_new if new blocks are allocated.
Reviewed-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
In __allocate_data_blocks, we should check the current blkaddr located at
ofs_in_node of the dnode page instead of always checking the first blkaddr.
Otherwise we can only allocate one blkaddr in each dnode page. Fix it.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch has no effect on the previous behavior, since
f2fs_write_data_page bypasses writing the page during POR.
The difference is that this patch avoids holding the writepages mutex.
This avoids the following false warning, which can happen only
when mount and shutdown are triggered at the same time.
======================================================
[ INFO: possible circular locking dependency detected ]
4.0.0-rc1+ #3 Tainted: G O
-------------------------------------------------------
kworker/u8:0/2270 is trying to acquire lock:
(&sbi->gc_mutex){+.+.+.}, at: [<ffffffffa02bdd33>] f2fs_balance_fs+0x73/0x90 [f2fs]
but task is already holding lock:
(&sbi->writepages){+.+...}, at: [<ffffffffa02b261b>] f2fs_write_data_pages+0xcb/0x3a0 [f2fs]
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&sbi->writepages){+.+...}:
[<ffffffff810e2b11>] lock_acquire+0xe1/0x2f0
[<ffffffff8185e1b3>] mutex_lock_nested+0x63/0x530
[<ffffffffa02b261b>] f2fs_write_data_pages+0xcb/0x3a0 [f2fs]
[<ffffffff811c38c1>] do_writepages+0x21/0x50
[<ffffffff8126c5a6>] __writeback_single_inode+0x76/0xbf0
[<ffffffff8126e23a>] writeback_single_inode+0xea/0x1c0
[<ffffffff8126e425>] write_inode_now+0x95/0xa0
[<ffffffff81259dab>] iput+0x20b/0x3f0
[<ffffffffa02c1c8b>] recover_data.constprop.14+0x26b/0xa80 [f2fs]
[<ffffffffa02c2776>] recover_fsync_data+0x2b6/0x5e0 [f2fs]
[<ffffffffa02a9744>] f2fs_fill_super+0xb24/0xb90 [f2fs]
[<ffffffff8123d7f4>] mount_bdev+0x1a4/0x1e0
[<ffffffffa02a3c85>] f2fs_mount+0x15/0x20 [f2fs]
[<ffffffff8123e159>] mount_fs+0x39/0x180
[<ffffffff8125e51b>] vfs_kern_mount+0x6b/0x160
[<ffffffff81261554>] do_mount+0x204/0xbe0
[<ffffffff8126223b>] SyS_mount+0x8b/0xe0
[<ffffffff81863e6d>] system_call_fastpath+0x16/0x1b
-> #1 (&sbi->cp_mutex){+.+...}:
[<ffffffff810e2b11>] lock_acquire+0xe1/0x2f0
[<ffffffff8185e1b3>] mutex_lock_nested+0x63/0x530
[<ffffffffa02acbf2>] write_checkpoint+0x42/0x1230 [f2fs]
[<ffffffffa02a847d>] f2fs_sync_fs+0x9d/0x2a0 [f2fs]
[<ffffffff81272f82>] sync_filesystem+0x82/0xb0
[<ffffffff8123c214>] generic_shutdown_super+0x34/0x100
[<ffffffff8123c5f7>] kill_block_super+0x27/0x70
[<ffffffffa02a3c60>] kill_f2fs_super+0x20/0x30 [f2fs]
[<ffffffff8123ca49>] deactivate_locked_super+0x49/0x80
[<ffffffff8123d05e>] deactivate_super+0x4e/0x70
[<ffffffff8125df63>] cleanup_mnt+0x43/0x90
[<ffffffff8125e002>] __cleanup_mnt+0x12/0x20
[<ffffffff810a82e4>] task_work_run+0xc4/0xf0
[<ffffffff8101f0bd>] do_notify_resume+0x8d/0xa0
[<ffffffff81864141>] int_signal+0x12/0x17
-> #0 (&sbi->gc_mutex){+.+.+.}:
[<ffffffff810e2866>] __lock_acquire+0x1ac6/0x1c90
[<ffffffff810e2b11>] lock_acquire+0xe1/0x2f0
[<ffffffff8185e1b3>] mutex_lock_nested+0x63/0x530
[<ffffffffa02bdd33>] f2fs_balance_fs+0x73/0x90 [f2fs]
[<ffffffffa02b5938>] f2fs_write_data_page+0x348/0x5b0 [f2fs]
[<ffffffffa02af9da>] __f2fs_writepage+0x1a/0x50 [f2fs]
[<ffffffff811c1b54>] write_cache_pages+0x274/0x6f0
[<ffffffffa02b2630>] f2fs_write_data_pages+0xe0/0x3a0 [f2fs]
[<ffffffff811c38c1>] do_writepages+0x21/0x50
[<ffffffff8126c5a6>] __writeback_single_inode+0x76/0xbf0
[<ffffffff8126d44a>] writeback_sb_inodes+0x32a/0x710
[<ffffffff8126d8cf>] __writeback_inodes_wb+0x9f/0xd0
[<ffffffff8126dcdb>] wb_writeback+0x3db/0x850
[<ffffffff8126e848>] bdi_writeback_workfn+0x148/0x980
[<ffffffff810a3782>] process_one_work+0x1e2/0x840
[<ffffffff810a3f01>] worker_thread+0x121/0x460
[<ffffffff810a9dc8>] kthread+0xf8/0x110
[<ffffffff81863dbc>] ret_from_fork+0x7c/0xb0
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
If a page is cached but its block was deallocated, we don't need to make
the page dirty again in gc and truncate_partial_data_page.
In that case, we need to check its block allocation every time instead
of handing out the up-to-date page.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
If page's on-disk block was deallocated, let's remove up-to-date flag to avoid
further access with wrong contents.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
struct kiocb now is a generic I/O container, so move it to fs.h.
Also do a #include diet for aio.h while we're at it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
We update the extent cache for all user inodes of f2fs, including directory
inodes, so this patch gives directory inodes another chance to get the
physical address of a page from the extent cache.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch moves the FI_NO_EXTENT check into f2fs_{lookup,update}_extent_cache
instead of f2fs_{lookup,update}_extent_tree or {lookup,update}_extent_info.
There is no functional change in this patch.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch adds a fast lookup path for the rb-tree extent cache.
It adds a pointer to the most recently accessed extent node, 'cached_en', to
the extent tree. In the lookup path of the extent cache, we first check the
node that cached_en points to; if we do not hit in that node, we fall back to
looking up the extent node in the rb-tree.
This way we can sometimes avoid an unnecessarily slow lookup in the rb-tree.
Note that a side effect of this patch is increased memory cost, because we
additionally store one pointer in each extent tree structure.
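A minimal sketch of the cached-node-first lookup described above; the structure
and field names here are illustrative assumptions, not the actual f2fs
definitions:

#include <linux/rbtree.h>

/* Illustrative structures; the real f2fs extent cache differs in detail. */
struct sample_extent_node {
	struct rb_node rb_node;
	unsigned int fofs;	/* start file offset covered by this extent */
	unsigned int len;	/* number of blocks in this extent */
};

struct sample_extent_tree {
	struct rb_root root;
	struct sample_extent_node *cached_en;	/* most recently accessed node */
};

static struct sample_extent_node *
sample_lookup_extent(struct sample_extent_tree *et, unsigned int fofs)
{
	struct sample_extent_node *en = et->cached_en;
	struct rb_node *node;

	/* Fast path: hit in the recently accessed extent node. */
	if (en && fofs >= en->fofs && fofs < en->fofs + en->len)
		return en;

	/* Slow path: ordinary rb-tree walk. */
	node = et->root.rb_node;
	while (node) {
		en = rb_entry(node, struct sample_extent_node, rb_node);
		if (fofs < en->fofs)
			node = node->rb_left;
		else if (fofs >= en->fofs + en->len)
			node = node->rb_right;
		else {
			et->cached_en = en;	/* remember for the next lookup */
			return en;
		}
	}
	return NULL;
}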
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch adds trace for lookup/update/shrink/destroy ops in rb-tree extent cache.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch enables rb-tree based extent cache in f2fs.
When we mount with "-o extent_cache", f2fs will try to add recently accessed
page-block mappings into the rb-tree based extent cache as much as possible,
instead of the original single-extent info cache.
This way, f2fs can provide a more effective cache between the dnode page cache
and the disk. It gives a higher hit ratio with less memory, even when the dnode
page cache is reclaimed under memory pressure.
Storage: Sandisk sd card 64g
1.append write file (offset: 0, size: 128M);
2.override write file (offset: 2M, size: 1M);
3.override write file (offset: 4M, size: 1M);
...
4.override write file (offset: 48M, size: 1M);
...
5.override write file (offset: 112M, size: 1M);
6.sync
7.echo 3 > /proc/sys/vm/drop_caches
8.read file (size:128M, unit: 4k, count: 32768)
(time dd if=/mnt/f2fs/128m bs=4k count=32768)
Extent Hit Ratio:
                 before          patched
Hit Ratio        121 / 1071      1071 / 1071

Performance:
                 before          patched
real             0m37.051s       0m35.556s
user             0m0.040s        0m0.026s
sys              0m2.990s        0m2.251s

Memory Cost:
                 before          patched
Tree Count:      0               1  (size: 24 bytes)
Node Count:      0               45 (size: 1440 bytes)
v3:
o retest and given more details of test result.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch adds core functions, including the slab cache init function and the
init/lookup/update/shrink/destroy functions, for the rb-tree based extent cache.
Thanks to Jaegeuk Kim and Changman Lee, who gave many suggestions about the
detailed design and implementation of the extent cache.
Todo:
* register rb-based extent cache shrink with mm shrink interface.
v2:
o move set_extent_info and __is_{extent,back,front}_mergeable into f2fs.h.
o introduce __{attach,detach}_extent_node for code readability.
o add cond_resched() when fail to invoke kmem_cache_alloc/radix_tree_insert.
o fix some coding style and typo issues.
v3:
o fix oops due to using an unassigned pointer.
o use list_del to remove extent node in shrink list.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Changman Lee <cm224.lee@samsung.com>
[Jaegeuk Kim: add static for some functions and declare in f2fs.h]
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
In this patch, we do these jobs:
1. rename {check,update}_extent_cache to {lookup,update}_extent_info;
2. introduce a universal lookup/update interface for the extent cache:
f2fs_{lookup,update}_extent_cache, wrapping the above two real functions, and
export them to callers.
After the above cleanup, we can add the new rb-tree based extent cache behind
the exported interfaces.
v2:
o remove "f2fs_" for inner function {lookup,update}_extent_info suggested by
Jaegeuk Kim.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch introduces f2fs_map_bh to clean codes of check_extent_cache.
v2:
o cleanup f2fs_map_bh pointed out by Jaegeuk Kim.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
Rename the field 'blk_addr' to 'blk' in struct {f2fs_extent,extent_info},
since the annotation of this field already describes its meaning well.
This way, we can avoid long statements in the code of the following patches.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
Move ext_lock out of struct extent_info, so that in the following patches we can
pass variables of struct extent_info type as parameters carrying pure data.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch adds preallocation for data blocks to prepare f2fs_direct_IO.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch fixes the wrong handling of the buffer_new flag in get_block.
If f2fs allocates new blocks and maps the buffer_head, it needs to set buffer_new
on the bh_result.
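A minimal sketch of the intended get_block behavior; the function below is
illustrative, not the actual f2fs code, and the lookup/allocation step is
elided:

#include <linux/fs.h>
#include <linux/buffer_head.h>

/*
 * When a new block is allocated for the request, the mapped bh_result must
 * also carry buffer_new so that callers (e.g. direct IO code) know the block
 * contains no old data.
 */
static int sketch_get_block(struct inode *inode, sector_t iblock,
			    struct buffer_head *bh_result, int create)
{
	sector_t blkaddr = iblock;	/* placeholder mapping for this sketch */
	bool newly_allocated = false;

	/* ... look up the block, or allocate one if 'create' is set ... */

	map_bh(bh_result, inode->i_sb, blkaddr);
	if (newly_allocated)
		set_buffer_new(bh_result);
	return 0;
}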
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch merges the ->{invalidate,release}page functions for meta/node/data pages.
After this, duplicated code can be removed.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
If PagePrivate is removed by releasepage, f2fs loses track of its dirty page counts.
For example, try_to_release_page will not release a page when the page is dirty,
but our releasepage removes PagePrivate anyway.
[<ffffffff81188d75>] try_to_release_page+0x35/0x50
[<ffffffff811996f9>] invalidate_inode_pages2_range+0x2f9/0x3b0
[<ffffffffa02a7f54>] ? truncate_blocks+0x384/0x4d0 [f2fs]
[<ffffffffa02b7583>] ? f2fs_direct_IO+0x283/0x290 [f2fs]
[<ffffffffa02b7fb0>] ? get_data_block_fiemap+0x20/0x20 [f2fs]
[<ffffffff8118aa53>] generic_file_direct_write+0x163/0x170
[<ffffffff8118ad06>] __generic_file_write_iter+0x2a6/0x350
[<ffffffff8118adef>] generic_file_write_iter+0x3f/0xb0
[<ffffffff81203081>] new_sync_write+0x81/0xb0
[<ffffffff81203837>] vfs_write+0xb7/0x1f0
[<ffffffff81204459>] SyS_write+0x49/0xb0
[<ffffffff817c286d>] system_call_fastpath+0x16/0x1b
Reviewed-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
Currently, there are several variables with Boolean type, as below:
struct f2fs_sb_info {
...
	int s_dirty;
	bool need_fsck;
	bool s_closing;
...
	bool por_doing;
...
}
This has some issues:
1. some space in f2fs_sb_info is wasted due to compiler alignment after the
   Boolean type variables.
2. if we keep adding new flags to f2fs_sb_info, the structure will become messy.
So in this patch, we try to:
1. switch s_dirty to a Boolean type variable, since it only has the two states 0/1.
2. merge the s_dirty/need_fsck/s_closing/por_doing variables into s_flag.
3. introduce an enum type which indicates the different states of sbi.
4. use the newly introduced universal interfaces is_sbi_flag_set/{set,clear}_sbi_flag
   to operate on the flags of sbi.
After that, the above issues are fixed.
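A minimal sketch of the flag scheme described above; the helper names come from
this description, while the enum member names and structure layout are
illustrative assumptions rather than the exact f2fs definitions:

#include <linux/types.h>

/* One bit per sbi state; member names are illustrative. */
enum {
	SBI_IS_DIRTY,		/* replaces int s_dirty */
	SBI_NEED_FSCK,		/* replaces bool need_fsck */
	SBI_IS_CLOSE,		/* replaces bool s_closing */
	SBI_POR_DOING,		/* replaces bool por_doing */
};

struct sketch_sb_info {
	unsigned int s_flag;	/* holds all of the flags above */
};

static inline bool is_sbi_flag_set(struct sketch_sb_info *sbi, unsigned int type)
{
	return sbi->s_flag & (1U << type);
}

static inline void set_sbi_flag(struct sketch_sb_info *sbi, unsigned int type)
{
	sbi->s_flag |= (1U << type);
}

static inline void clear_sbi_flag(struct sketch_sb_info *sbi, unsigned int type)
{
	sbi->s_flag &= ~(1U << type);
}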
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch removes wrongly called unlock_page.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch aligns the start block address of a file for direct IO to f2fs's
section size.
Some flash devices manage a write unit larger than 4KB, and if direct_io'ed
data is written without being aligned to that unit, performance can be degraded
due to partial page copies.
Thus, since f2fs has a section that is well aligned to FTL units, we can align
the block address to the section size so that f2fs avoids this misalignment.
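A minimal sketch of the alignment arithmetic; the geometry parameters are passed
in explicitly here and are assumptions for illustration, not the actual f2fs
helpers:

/* Round a start block address up to the next section boundary. */
static inline unsigned long long
align_blkaddr_to_section(unsigned long long blkaddr,
			 unsigned int blocks_per_seg,
			 unsigned int segs_per_sec)
{
	unsigned long long blocks_per_sec =
		(unsigned long long)blocks_per_seg * segs_per_sec;

	return ((blkaddr + blocks_per_sec - 1) / blocks_per_sec) * blocks_per_sec;
}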
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch removes unnecessary function calls.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch relocates some operations to avoid unnecessary execution.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch uses dn->data_blkaddr as a parameter for the destination block
address.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch cleans up the parameters for trace_f2fs_submit_{read_,write_,page_,page_m}bio,
passing fio as one parameter.
Suggested-by: Jaegeuk Kim <jaegeuk@kernel.org>
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch adds the missing parameter _type_ for trace_f2fs_submit_page_bio, then
uses a DECLARE_EVENT_CLASS/DEFINE_EVENT_CONDITION pair to clean up some trace event
code related to f2fs_submit_page_{m,}bio.
Additionally, after we remove the redundant code, the code size is reduced:
text data bss dec hex filename
176787 8712 56 185555 2d4d3 f2fs.ko.org
174408 8648 56 183112 2cb48 f2fs.ko
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch activates f2fs_trace_ios.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch cleans up parameters on the IO paths.
The key idea is to add one more field, the block address, to f2fs_io_info, and
then pass this structure as the parameter.
Reviewed-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
Now that we use inmemory pages only for atomic writes and provide an abort
procedure, we don't need to truncate them explicitly.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch adds two new ioctls to release inmemory pages grabbed by atomic
writes.
o f2fs_ioc_abort_volatile_write
  - If the transaction failed, all the grabbed pages and data should be written.
o f2fs_ioc_release_volatile_write
  - This is to enhance the performance of the PERSIST mode in sqlite.
In order to avoid huge memory consumption which causes OOM, this patch changes
volatile writes to use normal dirty pages, whose flushing to the disk is blocked
only as long as the system does not suffer from memory pressure.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
Fix the wrong error number in error path of f2fs_write_begin.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
A deadlock can occur:
Thread 1]                          Thread 2]
- f2fs_write_data_pages           - f2fs_write_begin
  - lock_page(page #0)
                                    - grab_cache_page(page #X)
                                    - get_node_page(inode_page)
                                    - grab_cache_page(page #0)
                                      : to convert inline_data
  - f2fs_write_data_page
    - f2fs_write_inline_data
      - get_node_page(inode_page)
In this case, trying to lock inode_page and page #0 causes deadlock.
In order to avoid this, this patch adds a rule for this locking policy,
which is that page #0 should be locked followed by inode_page lock.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
We should put the inode page when an error occurs.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
__submit_merged_bio          f2fs_write_end_io          f2fs_write_end_io
wait_io = X                  wait_io = x
                             complete(X)                complete(X)
wait_io = NULL
wait_for_completion()
free(X)
                                                        spin_lock(X)
                                                        kernel panic
In order to avoid this, this patch removes the wait_io facility.
Instead, we can use wait_on_all_pages_writeback(sbi) to wait for end_ios.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch simplifies the inline_data usage with the following rules.
1. inline_data is set during file creation.
2. If new data is requested to be written beyond the range that inline_data can
   hold, f2fs converts that inode permanently.
3. There is no case that converts a non-inline_data inode back to inline_data.
4. The inline_data flag should be changed under the inode page lock.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
f2fs_write_begin() doesn't initialize the 'dn' variable if the inode has
inline data. However it uses its contents to decide whether it should
just zero out the page or load data to it. Thus if we are unlucky we can
zero out page contents instead of loading inline data into a page.
CC: stable@vger.kernel.org
CC: Changman Lee <cm224.lee@samsung.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
The scenario is like this.
inline_data i_size page write_begin/vm_page_mkwrite
X 30 dirty_page
X 30 write to #4096 position
X 30 get_dnode_of_data wait for get_dnode_of_data
O 30 write inline_data
O 30 get_dnode_of_data
O 30 reserve data block
..
In this case, we have #0 = NEW_ADDR and inline_data as well.
We should not allow this condition for further access.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
If user truncates file's data, we should truncate inmemory pages too.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch let inmemory pages be clean all the time.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch adds support for volatile writes which keep data pages in memory
until f2fs_evict_inode is called by iput.
For instance, we can use this feature for the sqlite database as follows.
While supporting atomic writes for the main database file, we can keep its
journal data temporarily in the page cache by the following sequence.
1. open
-> ioctl(F2FS_IOC_START_VOLATILE_WRITE);
2. writes
: keep all the data in the page cache.
3. flush to the database file with atomic writes
a. ioctl(F2FS_IOC_START_ATOMIC_WRITE);
b. writes
c. ioctl(F2FS_IOC_COMMIT_ATOMIC_WRITE);
4. close
-> drop the cached data
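A minimal user-space sketch of this sequence (error handling omitted; the
F2FS_IOC_* definitions below are assumed to match the kernel's f2fs.h of this
time and should be checked against your kernel source):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Assumed to match fs/f2fs/f2fs.h of this time; verify before use. */
#define F2FS_IOCTL_MAGIC		0xf5
#define F2FS_IOC_START_ATOMIC_WRITE	_IO(F2FS_IOCTL_MAGIC, 1)
#define F2FS_IOC_COMMIT_ATOMIC_WRITE	_IO(F2FS_IOCTL_MAGIC, 2)
#define F2FS_IOC_START_VOLATILE_WRITE	_IO(F2FS_IOCTL_MAGIC, 3)

int main(void)
{
	const char rec[] = "journal record\n";
	int jfd = open("/mnt/f2fs/journal", O_RDWR | O_CREAT, 0644);
	int dfd = open("/mnt/f2fs/database", O_RDWR | O_CREAT, 0644);

	if (jfd < 0 || dfd < 0)
		return 1;

	/* 1. keep journal writes in the page cache only */
	ioctl(jfd, F2FS_IOC_START_VOLATILE_WRITE);

	/* 2. writes stay cached in memory */
	write(jfd, rec, sizeof(rec) - 1);

	/* 3. flush to the database file with atomic (all-or-nothing) writes */
	ioctl(dfd, F2FS_IOC_START_ATOMIC_WRITE);
	write(dfd, rec, sizeof(rec) - 1);
	ioctl(dfd, F2FS_IOC_COMMIT_ATOMIC_WRITE);

	/* 4. closing the journal drops its cached data */
	close(jfd);
	close(dfd);
	return 0;
}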
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch introduces a very limited functionality for atomic write support.
In order to support atomic write, this patch adds two ioctls:
o F2FS_IOC_START_ATOMIC_WRITE
o F2FS_IOC_COMMIT_ATOMIC_WRITE
The database engine should be aware of the following sequence.
1. open
-> ioctl(F2FS_IOC_START_ATOMIC_WRITE);
2. writes
: all the written data will be treated as atomic pages.
3. commit
-> ioctl(F2FS_IOC_COMMIT_ATOMIC_WRITE);
: this flushes all the data blocks to the disk, which will be shown all or
nothing by f2fs recovery procedure.
4. repeat from #2.
The IO patterns should be:
,- START_ATOMIC_WRITE ,- COMMIT_ATOMIC_WRITE
CP | D D D D D D | FSYNC | D D D D | FSYNC ...
`- COMMIT_ATOMIC_WRITE
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
The block size in f2fs is 4096 bytes, so theoretically f2fs can support devices
with a sector size of up to 4096 bytes. But for now f2fs only supports a 512-byte
sector size, so block devices such as zRAM, which uses the page cache as its
block storage space, cannot be mounted successfully because of the mismatch
between zRAM's sector size and the sector size f2fs supports.
In this patch we support large sector sizes in f2fs, so block devices with a
sector size of 512/1024/2048/4096 bytes can be supported.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
f2fs_direct_IO uses __allocate_data_block, but inside the allocation path, we
should update i_size at the time it changes, so that its inode page is updated.
Otherwise, we can get a wrong i_size after roll-forward recovery.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch cleans up a simple macro.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch revisits all of the recovery information used during f2fs_sync_file.
In this patch, three pieces of information are used to make a decision.
a) IS_CHECKPOINTED, /* is it checkpointed before? */
b) HAS_FSYNCED_INODE, /* is the inode fsynced before? */
c) HAS_LAST_FSYNC, /* has the latest node fsync mark? */
And, the scenarios for our rule are based on:
[Term] F: fsync_mark, D: dentry_mark
1. inode(x) | CP | inode(x) | dnode(F)
2. inode(x) | CP | inode(F) | dnode(F)
3. inode(x) | CP | dnode(F) | inode(x) | inode(F)
4. inode(x) | CP | dnode(F) | inode(F)
5. CP | inode(x) | dnode(F) | inode(DF)
6. CP | inode(DF) | dnode(F)
7. CP | dnode(F) | inode(DF)
8. CP | dnode(F) | inode(x) | inode(DF)
For example, #3, the three conditions should be changed as follows.
     inode(x) | CP | dnode(F) | inode(x) | inode(F)
a)      x       o       o          o          o
b)      x       x       x          x          o
c)      x       o       o          x          o
If f2fs_sync_file stops -----------^,
it should write inode(F) ---------------------^
So, the need_inode_block_update should return true, since
c) get_nat_flag(e, HAS_LAST_FSYNC), is false.
For example, #8,
CP | alloc | dnode(F) | inode(x) | inode(DF)
a) o x x x x
b) x x x o
c) o o x o
If f2fs_sync_file stops -------^,
it should write inode(DF) --------------^
Note that the roll-forward policy should follow this rule, which means that
if there are any missing blocks, we don't need to recover that inode.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
Previously f2fs only counted dirty dentry pages, but there is no reason not to
expand the scope.
This patch changes the names used for dirty page management and also counts
dirty pages in each inode info.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
If any f2fs_bug_on is triggered, fsck.f2fs is needed.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|
|
This patch adds three inline functions to clean up dirty casting codes.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
|