Age | Commit message | Author |
|
UBI debugging functions were a little bit over-engineered and
returned more error codes than needed, and the callers had to
do useless checks. Simplify the return codes.
Impact: only debugging code is affected, which means that for
non-developers this is a no-op patch.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
The 'paranoid_check_empty()' function is bogus. This is easily seen
on NOR flash, which has long erase cycles and may easily end up
with half-erased eraseblocks. In this case the paranoid check fails.
It is just wrong to assume that PEBs which do not have EC headers
always contain all 0xFF. Such an assumption should not be made at
the I/O level, which is quite low.
Thus, just kill the check.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
This patch adds code which makes sure eraseblocks contain all 0xFF
bytes before they are used. The verification is done only when
debugging checks are enabled.
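A minimal sketch of such a check (the helper name is illustrative, not necessarily the function added by the patch): read the eraseblock back into a buffer and verify that every byte is 0xFF.

#include <linux/types.h>
#include <linux/errno.h>

/* Sketch of an "is this eraseblock really empty?" debugging check. */
static int check_all_ff(const void *buf, int len)
{
        const u8 *p = buf;
        int i;

        for (i = 0; i < len; i++)
                if (p[i] != 0xFF)
                        return -EINVAL; /* the eraseblock is not empty */
        return 0;
}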
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Some of the typos were indicated by Adrian Hunter,
some by 'aspell'.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
'kmem_cache_free()' oopses if NULL is passed to it, and there is
one error-path place where UBI may call it with a NULL object.
This problem was pointed out by Adrian Hunter.
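The fix boils down to a guard of this form (the object and cache names below are illustrative, not the actual ones from the error path):

        if (obj)
                kmem_cache_free(cache, obj);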
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
When marking a PEB as bad, print how many PEBs are left reserved.
This is very useful information.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Print not only the PEB number, but also the LEB number and volume id,
which is very useful for bug hunting.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
This patch improves UBI error handling. ATM UBI switches to
R/O mode when the WL worker fails to read the source PEB.
This means that the upper layers (e.g., UBIFS) have no
chance to unmap the erroneous PEB and fix the error.
This patch changes this behaviour and makes UBI put such
PEBs into a separate RB-tree, thus preventing the
WL worker from hitting the same read errors again and
again.
But there is a limit on the maximum number of such PEBs - 10%.
If there are too many of them, UBI switches to R/O mode.
Additionally, this patch teaches UBI not to panic and
switch to R/O mode if, after a PEB has been copied, the
target LEB cannot be read back. Instead, UBI now cancels
the operation and schedules the target PEB for torturing.
The error paths have been tested by injecting errors
into 'ubi_eba_copy_leb()'.
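A conceptual sketch of the new handling (the field and helper names below are illustrative, not necessarily those used in the patch):

        /* On a source-PEB read error, park the PEB in a separate RB-tree
         * instead of going read-only, unless too many such PEBs piled up. */
        if (ubi->erroneous_count >= ubi->max_erroneous) {
                ubi_err("too many erroneous eraseblocks (%d)", ubi->erroneous_count);
                goto out_ro;                    /* give up: switch to R/O mode */
        }
        wl_tree_add(e1, &ubi->erroneous);       /* keep WL away from this PEB */
        ubi->erroneous_count += 1;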
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
This patch fixes the error path in the WL worker - in some cases
UBI oopses when 'goto out_error' happens and e1 or e2 are NULL.
This patch also cleans up the error paths a little. I have
tested nearly all error paths in the WL worker.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
This patch is a clean-up and a preparation for the following
patches. It introduces constants for the return values of the
'ubi_eba_copy_leb()' function.
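For illustration, the constants could look roughly like this (the names below are illustrative; see the patch for the real ones):

        /* Possible outcomes of 'ubi_eba_copy_leb()'. */
        enum {
                MOVE_OK = 0,            /* the LEB was moved successfully */
                MOVE_CANCELED,          /* operation canceled, pick another PEB */
                MOVE_SOURCE_RD_ERR,     /* could not read the source PEB */
                MOVE_TARGET_WR_ERR,     /* could not write the target PEB */
        };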
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
- (better, more, bigger ...) then -> (...) than
Signed-off-by: Frederik Schwarzer <schwarzerf@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
UBI has 2 RB-trees to implement PEB protection, which is too
much for simply preventing a PEB from being moved for some time.
This patch implements the protection using lists instead (a rough
sketch follows the list of benefits below). The benefits:
1. No need to allocate protection entry on each PEB get.
2. No need to maintain balanced trees and walk them.
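Rough sketch of the list-based protection (illustrative, not the exact data structures of the patch): a PEB that must not be moved for some time is simply linked into a protection list and unlinked later, instead of allocating a protection entry and balancing an RB-tree.

#include <linux/list.h>

static LIST_HEAD(protected_pebs);       /* PEBs temporarily exempt from WL */

static void protect_peb(struct list_head *link)
{
        list_add_tail(link, &protected_pebs);
}

static void unprotect_peb(struct list_head *link)
{
        list_del_init(link);
}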
Signed-off-by: Xiaochuan-Xu <xiaochuan-xu@cqu.edu.cn>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
This patch modifies @struct ubi_wl_entry and adds a union which
contains only one element so far. This is just a preparation
for further changes which will kill the protection tree and
make UBI use a list instead.
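Roughly, the structure becomes something like the following (a sketch; see the patch for the exact layout):

#include <linux/rbtree.h>

struct ubi_wl_entry {
        union {
                struct rb_node rb;      /* the only member so far */
                /* a list_head is expected to be added here later */
        } u;
        int ec;                         /* erase counter */
        int pnum;                       /* physical eraseblock number */
};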
Signed-off-by: Xiaochuan-Xu <xiaochuan-xu@cqu.edu.cn>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
When a PEB is being moved and a write error happens, UBI switches
to R/O mode, which is wrong, because we are just copying the data
and may select a different PEB and re-try. This patch
fixes the WL worker's behaviour.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Make sure the resources have not already been freed before
freeing them in the error path of the WL worker function.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Minor code re-structuring and comment fixes to improve readability.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
If ubi_thread() exits but kthread_should_stop() is not true,
then kthread_stop() will never return and the cleanup thread
will stay in the "D" state forever.
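The required pattern for a kernel thread that co-operates with kthread_stop() looks roughly like this (a generic sketch, not the UBI background thread verbatim):

#include <linux/kthread.h>

static int example_thread(void *arg)
{
        while (!kthread_should_stop()) {
                /* ... do one piece of background work or sleep ... */
        }
        /*
         * Return only after kthread_should_stop() has become true,
         * otherwise the kthread_stop() caller will wait forever.
         */
        return 0;
}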
Signed-off-by: Vitaliy Gusev <vgusev@openvz.org>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
No functional changes, just tweak comments to make kernel-doc
work fine and stop complaining.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Just out of curiosity I ran checkpatch.pl for the whole of UBI
and discovered that there are quite a few stylistic issues.
Fix them.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
If bit-flips happen often, UBI prints too many messages. Lessen
the amount by printing the messages only after the PEB has actually
been scrubbed. Also, print torturing messages.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Hch asked not to use "unit" for sub-systems, so let it be.
Also make some other comment modifications.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
UBI already checks that @min_io_size is a power of 2 in io_init(),
so it is safe to use bit operations.
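For a power-of-2 @min_io_size, divisions and modulo operations can be replaced with shifts and masks, roughly like this (the field names below are illustrative, not necessarily the real ones):

        /* Pre-computed once in io_init(), assuming min_io_size is a power of 2. */
        ubi->min_io_shift = fls(ubi->min_io_size) - 1;

        /* Then '/' and '%' turn into shifts and masks: */
        pages  = len >> ubi->min_io_shift;          /* len / min_io_size */
        offset = addr & (ubi->min_io_size - 1);     /* addr % min_io_size */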
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Old gcc complains:
CC drivers/mtd/ubi/wl.o
drivers/mtd/ubi/wl.c: In function 'wear_leveling_worker':
drivers/mtd/ubi/wl.c:746: warning: 'pe' may be used uninitialized in this function
CC drivers/mtd/ubi/scan.o
drivers/mtd/ubi/scan.c: In function 'ubi_scan':
drivers/mtd/ubi/scan.c:772: warning: 'ec' may be used uninitialized in this function
drivers/mtd/ubi/scan.c:772: note: 'ec' was declared here
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
The problem: NAND flashes have different amount of initial bad physical
eraseblocks (marked as bad by the manufacturer). For example, for 256MiB
Samsung OneNAND flash there might be from 0 to 40 bad initial eraseblocks,
which is about 2%. When UBI is used as the base system, one needs to know
the exact amount of good physical eraseblocks, because this number is
needed to create the UBI image which is put to the devices during
production. But this number is not known, which forces us to use the
minimum number of good physical eraseblocks. And UBI additionally
reserves some percentage of physical eraseblocks for bad block handling
(default is 1%), so we have 1-3% of PEBs reserved at the end, depending
on the amount of initial bad PEBs. But it is desired to always have
1% (or more, depending on the configuration).
Solution: this patch adds an "auto-resize" flag to the volume table.
The volume which has the "auto-resize" flag will automatically be re-sized
(enlarged) on the first UBI initialization. UBI clears the flag when
the volume is re-sized. Only one volume may have the "auto-resize" flag.
So, the production UBI image may have one volume with "auto-resize"
flag set, and its size is automatically adjusted on the first boot
of the device.
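Conceptually, the initialization code does something like the following (a sketch; the helper and field names are illustrative):

        /* On the first initialization: if a volume record carries the
         * auto-resize flag, enlarge the volume to use all available PEBs
         * and then clear the flag in the volume table. */
        if (vtbl_rec->flags & UBI_VTBL_AUTORESIZE_FLG) {
                err = resize_volume(ubi, vol,
                                    vol->reserved_pebs + ubi->avail_pebs);
                if (!err)
                        vtbl_rec->flags &= ~UBI_VTBL_AUTORESIZE_FLG;
        }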
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Prepare the attach and detach functions to be used outside of
module initialization:
* detach function checks reference count before detaching
* it kills the background thread as well
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
This is one more step on the way to "removable" UBI devices. It
adds reference counting for UBI devices. Every time a volume on
this device is opened, the device's reference count is increased. It
is also increased if someone is reading any sysfs file of this
UBI device or of one of its volumes.
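The counting itself is plain reference counting under a lock, along these lines (a sketch with illustrative names):

        /* Take a reference when a volume or a sysfs file is opened ... */
        spin_lock(&ubi_devices_lock);
        ubi->ref_count += 1;
        spin_unlock(&ubi_devices_lock);

        /* ... and drop it again on release; the UBI device may only be
         * detached once the reference count is back to zero. */
        spin_lock(&ubi_devices_lock);
        ubi->ref_count -= 1;
        spin_unlock(&ubi_devices_lock);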
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
The flush function should finish all pending jobs. But if
somebody else is doing work, this function should wait and let
it finish.
This patch uses an rw semaphore for synchronization - it
just looks quite convenient.
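The waiting part of the idea, roughly (a sketch; names are illustrative):

        /* Workers take the semaphore for reading while processing a job ... */
        down_read(&ubi->work_sem);
        err = do_one_work(ubi);
        up_read(&ubi->work_sem);

        /* ... so flush can take it for writing in order to wait until every
         * worker that is currently running has finished. */
        down_write(&ubi->work_sem);
        up_write(&ubi->work_sem);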
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
When the WL worker is moving an LEB, the volume might go away
in the meantime. UBI does not handle these situations correctly.
This patch introduces a new mutex which serializes the wear-levelling
worker and the 'ubi_wl_put_peb()' function. Now, if one puts
an LEB, and its PEB is being moved, it will wait on the mutex.
And because we unmap all LEBs when removing volumes, this will make
the volume removal function wait while the LEB movement
finishes.
Below is an example of an oops which should be fixed by this patch:
Pid: 9167, comm: io_paral Not tainted (2.6.24-rc5-ubi-2.6.git #2)
EIP: 0060:[<f884a379>] EFLAGS: 00010246 CPU: 0
EIP is at prot_tree_del+0x2a/0x63 [ubi]
EAX: f39a90e0 EBX: 00000000 ECX: 00000000 EDX: 00000134
ESI: f39a90e0 EDI: f39a90e0 EBP: f2d55ddc ESP: f2d55dd4
DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process io_paral (pid: 9167, ti=f2d54000 task=f72a8030 task.ti=f2d54000)
Stack: f39a95f8 ef6aae50 f2d55e08 f884a511 f88538e1 f884ecea 00000134 00000000
f39a9604 f39a95f0 efea8280 00000000 f39a90e0 f2d55e40 f8847261 f8850c3c
f884eaad 00000001 000000b9 00000134 00000172 000000b9 00000134 00000001
Call Trace:
[<c0105227>] show_trace_log_lvl+0x1a/0x30
[<c01052e2>] show_stack_log_lvl+0xa5/0xca
[<c01053d6>] show_registers+0xcf/0x21b
[<c0105648>] die+0x126/0x224
[<c0119a62>] do_page_fault+0x27f/0x60d
[<c037dd62>] error_code+0x72/0x78
[<f884a511>] ubi_wl_put_peb+0xf0/0x191 [ubi]
[<f8847261>] ubi_eba_unmap_leb+0xaf/0xcc [ubi]
[<f8843c21>] ubi_remove_volume+0x102/0x1e8 [ubi]
[<f8846077>] ubi_cdev_ioctl+0x22a/0x383 [ubi]
[<c017d768>] do_ioctl+0x68/0x71
[<c017d7c6>] vfs_ioctl+0x55/0x271
[<c017da15>] sys_ioctl+0x33/0x52
[<c0104152>] sysenter_past_esp+0x5f/0xa5
=======================
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Explain better the purpose of the 'move_to_put' stuff.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Similarly to ltree_entry_slab, it makes more sense to create
and destroy the ubi_wl_entry slab on module initialization/exit.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
The task_struct->pid member is going to be deprecated, so start
using the helpers (task_pid_nr/task_pid_vnr/task_pid_nr_ns) in
the kernel.
The first thing to start with is the pid, printed to dmesg - in
this case we may safely use task_pid_nr(). Besides, printks produce
more (much more) than a half of all the explicit pid usage.
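In printk-style usage the conversion is mechanical, e.g.:

        /* Before: use the raw pid field directly. */
        printk(KERN_INFO "thread %d stops\n", current->pid);

        /* After: go through the helper. */
        printk(KERN_INFO "thread %d stops\n", task_pid_nr(current));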
[akpm@linux-foundation.org: git-drm went and changed lots of stuff]
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Cc: Dave Airlie <airlied@linux.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Similar reason as in the case of the previous patch: it causes
deadlocks if a filesystem with writeback support works on top
of UBI. So pre-allocate the needed buffers when attaching the MTD device.
We also need mutexes to protect the buffers, but they do not
cause much contention because they are used in the recovery, torture,
and WL copy routines, which are called seldom.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Use GFP_NOFS flag when allocating memory on I/O path, because otherwise
we may deadlock the filesystem which works on top of us. We observed
the deadlocks with UBIFS. Example:
VFS -> FS locks a lock -> UBI -> kmalloc() -> VFS writeback -> FS tries
to lock the same lock again.
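In practice the change is a switch of the allocation flag on the I/O path, e.g.:

        /* Before: the allocator may recurse into filesystem writeback. */
        buf = kmalloc(len, GFP_KERNEL);

        /* After: GFP_NOFS forbids calling back into the filesystem. */
        buf = kmalloc(len, GFP_NOFS);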
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
I hit these situations and noticed a lack of print messages. Add more
prints when erase problems occur.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Slab destructors were no longer supported after Christoph's
c59def9f222d44bb7e2f0a559f2906191a0862d7 change. They've been
BUGs for both slab and slub, and slob never supported them
either.
This rips out support for the dtor pointer from kmem_cache_create()
completely and fixes up every single callsite in the kernel (there were
about 224, not including the slab allocator definitions themselves,
or the documentation references).
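A typical call site simply loses its last argument after this change, e.g. (illustrative cache name):

        /* Before: the last argument was the destructor. */
        cache = kmem_cache_create("example_cache", sizeof(struct example),
                                  0, SLAB_HWCACHE_ALIGN, NULL, NULL);

        /* After: only the constructor pointer remains. */
        cache = kmem_cache_create("example_cache", sizeof(struct example),
                                  0, SLAB_HWCACHE_ALIGN, NULL);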
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Do not switch to read-only mode in the case of -EINTR and some
other obvious cases. Switch to R/O mode only when we do not
know what the error is.
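Schematically (a sketch of the idea; the exact set of errors treated as harmless is per the patch):

        /* Do not punish the whole device for transient or expected errors. */
        if (err == -EINTR || err == -ENOMEM || err == -EAGAIN)
                return err;             /* let the caller handle or retry */

        ubi_ro_mode(ubi);               /* unknown error: go read-only */
        return err;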
Reported-by: Vinit Agnihotri <vinit.agnihotri@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Kill UBI's homegrown endianness handling and replace it with
the standard kernel endianness handling.
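After the conversion, on-media fields use the standard explicit-endianness types and helpers, e.g. (illustrative fragment, assuming a big-endian on-flash field):

        __be32 raw;                     /* as stored on flash */
        u32 ec;

        ec  = be32_to_cpu(raw);         /* convert when reading from flash */
        raw = cpu_to_be32(ec + 1);      /* convert back before writing */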
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Currently, the freezer treats all tasks as freezable, except for the kernel
threads that explicitly set the PF_NOFREEZE flag for themselves. This
approach is problematic, since it requires every kernel thread to either
set PF_NOFREEZE explicitly, or call try_to_freeze(), even if it doesn't
care for the freezing of tasks at all.
It seems better to only require the kernel threads that want to or need to
be frozen to use some freezer-related code and to remove any
freezer-related code from the other (nonfreezable) kernel threads, which is
done in this patch.
The patch causes all kernel threads to be nonfreezable by default (ie. to
have PF_NOFREEZE set by default) and introduces the set_freezable()
function that should be called by the freezable kernel threads in order to
unset PF_NOFREEZE. It also makes all of the currently freezable kernel
threads call set_freezable(), so it shouldn't cause any (intentional)
change of behaviour to appear. Additionally, it updates documentation to
describe the freezing of tasks more accurately.
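A freezable kernel thread now opts in explicitly, e.g.:

#include <linux/freezer.h>
#include <linux/kthread.h>

static int example_thread(void *arg)
{
        set_freezable();                /* unset PF_NOFREEZE: we want to be frozen */

        while (!kthread_should_stop()) {
                try_to_freeze();        /* park here during suspend/hibernation */
                /* ... do the actual work ... */
        }
        return 0;
}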
[akpm@linux-foundation.org: build fixes]
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Nigel Cunningham <nigel@nigel.suspend2.net>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
UBI (Latin: "where?") manages multiple logical volumes on a single
flash device, specifically supporting NAND flash devices. UBI provides
a flexible partitioning concept which still allows for wear-levelling
across the whole flash device.
In a sense, UBI may be compared to the Logical Volume Manager
(LVM). Whereas LVM maps logical sector numbers to physical HDD sector
numbers, UBI maps logical eraseblocks to physical eraseblocks.
More information may be found at
http://www.linux-mtd.infradead.org/doc/ubi.html
Partitioning/Re-partitioning
An UBI volume occupies a certain number of erase blocks. This is
limited by a configured maximum volume size, which could also be
viewed as the partition size. Each individual UBI volume's size can
be changed independently of the other UBI volumes, provided that the
sum of all volume sizes doesn't exceed a certain limit.
UBI supports dynamic volumes and static volumes. Static volumes are
read-only and their contents are protected by CRC check sums.
Bad eraseblocks handling
UBI transparently handles bad eraseblocks. When a physical
eraseblock becomes bad, it is substituted by a good physical
eraseblock, and the user does not even notice this.
Scrubbing
On a NAND flash bit flips can occur on any write operation,
sometimes also on read. If bit flips persist on the device, at first
they can still be corrected by ECC, but once they accumulate,
correction will become impossible. Thus it is best to actively scrub
the affected eraseblock, by first copying it to a free eraseblock
and then erasing the original. The UBI layer performs this type of
scrubbing under the covers, transparently to the UBI volume users.
Erase Counts
UBI maintains an erase count header per eraseblock. This frees
higher-level layers (like file systems) from doing this and allows
for centralized erase count management instead. The erase counts are
used by the wear-levelling algorithm in the UBI layer. The algorithm
itself is exchangeable.
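For illustration, such an erase-count header carries at least the counter itself plus integrity data; a simplified sketch (not the exact on-flash layout):

struct ec_header {
        __be32 magic;           /* identifies the header when scanning */
        __be64 ec;              /* how many times this PEB has been erased */
        __be32 hdr_crc;         /* CRC over the header itself */
};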
Booting from NAND
For booting directly from NAND flash the hardware must at least be
capable of fetching and executing a small portion of the NAND
flash. Some NAND flash controllers have this kind of support. They
usually limit the window to a few kilobytes in erase block 0. This
"initial program loader" (IPL) must then contain sufficient logic to
load and execute the next boot phase.
Due to bad eraseblocks, which may be randomly scattered over the
flash device, it is problematic to store the "secondary program
loader" (SPL) statically. Also, due to bit-flips it may become
corrupted over time. UBI makes it possible to solve this problem gracefully by
storing the SPL in a small static UBI volume.
UBI volumes vs. static partitions
UBI volumes are still very similar to static MTD partitions:
* both consist of eraseblocks (logical eraseblocks in case of UBI
volumes, and physical eraseblocks in case of static partitions);
* both support three basic operations - read, write, erase.
But UBI volumes have the following advantages over traditional
static MTD partitions:
* there are no eraseblock wear-leveling constraints in case of UBI
volumes, so the user does not have to care about this;
* there are no bit-flips and bad eraseblocks in case of UBI volumes.
So, UBI volumes may be considered as flash devices with relaxed
restrictions.
Where can it be found?
Documentation, kernel code and applications can be found in the MTD
gits.
What are the applications for?
The applications help to create binary flash images for two purposes: pfi
files (partial flash images) for in-system update of UBI volumes, and plain
binary images, with or without OOB data in case of NAND, for a manufacturing
step. Furthermore, some tools exist and more will be created that allow
flash content analysis after a system has crashed.
Who did UBI?
The original ideas on which UBI is based were developed by Andreas
Arnez, Frank Haverkamp and Thomas Gleixner. Josh W. Boyer and some others
were involved too. The implementation of the kernel layer was done by Artem
B. Bityutskiy. The user-space applications and tools were written by Oliver
Lohmann with contributions from Frank Haverkamp, Andreas Arnez, and Artem.
Joern Engel contributed a patch which modifies JFFS2 so that it can be run on
a UBI volume. Thomas Gleixner did modifications to the NAND layer. Alexander
Schmidt did some testing work as well as core functionality improvements.
Signed-off-by: Artem B. Bityutskiy <dedekind@linutronix.de>
Signed-off-by: Frank Haverkamp <haver@vnet.ibm.com>
|