Intermec CK1 Reference Guide
Appendix A —
µClinux System
CK1 SDK Programmer’s Reference Manual
Data is stored strictly linearly across the whole storage medium. Data can
easily be appended until it reaches the end of the media; after that, data
can only be written to areas that already contain dirty space and are
allowed to be overwritten. At this point garbage collection is triggered by
a threshold to free up space if possible. The trigger is either a kernel
thread or a process that attempts to write to the storage and finds it out
of free space. The collector proceeds linearly from the head of the Flash
toward the tail: blocks at the head are copied to the tail and afterwards
erased. Any space freed by this process is again marked clean. The head
marker is then moved to the next reserved block, and writing can start
again from the beginning of the Flash. Because every block is erased and
rewritten at the same rate, wear on the Flash stays perfectly level. The
only problem with this method is that the unnecessary erasing and writing
wears the Flash. This and some other limitations were the key reasons a
new project was started to develop a more evolved version of the file
system.
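The head-to-tail collection described above can be sketched as a toy model. All names here (FlashLog, supersede, and so on) are hypothetical illustrations, not the real JFFS code; erase blocks are reduced to simple (data, valid) pairs.

```python
from collections import deque

class FlashLog:
    """Toy model of the purely linear JFFS log: blocks are written
    front-to-back, and the collector proceeds from the head."""

    def __init__(self, num_blocks):
        self.num_blocks = num_blocks
        self.log = deque()            # blocks in on-flash order: [data, valid]

    def write(self, data):
        # The write path triggers garbage collection when out of free space.
        while len(self.log) == self.num_blocks and any(not v for _, v in self.log):
            self.collect()
        if len(self.log) == self.num_blocks:
            raise IOError("flash full: no dirty space to reclaim")
        self.log.append([data, True])

    def supersede(self, data):
        # Rewriting a file only marks the old node dirty; nothing is erased yet.
        for node in self.log:
            if node[0] == data:
                node[1] = False

    def collect(self):
        # Proceed linearly from the head: a still-valid head block is first
        # copied to the tail; the head block is then erased (dropped) either way.
        head = self.log.popleft()
        if head[1]:
            self.log.append(head)
```

Note how a head block holding only valid data is still rewritten at the tail before being erased; this is the behavior that levels wear perfectly but also causes the unnecessary erase/write cycles criticized above.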
The journaling Flash file system got a new revision based on the design
concepts of the first version. JFFS2 was developed by Red Hat. The project
began as an effort to add compression to JFFS, but because of the
limitations of the first version it was decided that the code would be
rewritten to overcome all the problems of the first release. Development
targeted the eCos embedded operating system, so the code was released
under dual licenses: the GPL and the Red Hat eCos Public License. It was
officially included in the 2.4.10 kernel series.
Compression in JFFS2 is based on a quick zlib-based algorithm that
compresses every file before it is written to storage. When data is read
from the storage, it is decompressed on the fly, so the whole process is
invisible to the end user. The second version also provides a more
efficient, non-sequential garbage collection: it treats all blocks
individually, allowing the garbage collector to decide which block will be
erased next. Each erase block in the log structure is kept on one of
several lists, depending on the block's current contents. The next block
to erase is chosen with the help of the jiffies counter, using the formula
jiffies % 100: a non-zero result means the block to erase is taken from
the dirty_list, while the remaining 1 time in 100 the pick is made from
the clean_list, which contains only valid nodes. This ensures that data is
also moved around the media and wear leveling is achieved.
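The jiffies-based victim selection can be sketched as follows. The list handling is simplified here (real JFFS2 keeps several more lists than these two), and the function name is hypothetical; only the jiffies % 100 split comes from the text above.

```python
def pick_gc_block(dirty_list, clean_list, jiffies):
    """Choose the next erase block for garbage collection.
    When jiffies % 100 is non-zero -- 99 times out of 100 -- the victim
    comes from dirty_list; the remaining 1 in 100, a block containing only
    valid nodes is taken from clean_list, so static data also gets moved
    around the media (wear leveling)."""
    if jiffies % 100 != 0 and dirty_list:
        return dirty_list.pop(0)
    if clean_list:
        return clean_list.pop(0)
    return dirty_list.pop(0) if dirty_list else None
```

Collecting a clean block wastes one erase cycle on purely valid data, but doing it only 1% of the time keeps that overhead small while preventing rarely-written blocks from escaping wear leveling entirely.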
With both JFFS and JFFS2, one major flaw is the amount of space required
by the garbage collector: five full erase blocks must be kept free in
order to guarantee that a new write from user space can be performed. The
compression that JFFS2 uses also causes unnecessary overhead for files
that are already compressed.
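The reservation requirement amounts to a simple admission check before any user-space write. This sketch is hypothetical (the constant and function names are illustrative); only the figure of five reserved erase blocks comes from the text above.

```python
GC_RESERVED_BLOCKS = 5  # full erase blocks held back for the garbage collector

def can_write(free_blocks, blocks_needed):
    """Permit a user-space write only if, after the write, the garbage
    collector would still have its full reserve of free erase blocks."""
    return free_blocks - blocks_needed >= GC_RESERVED_BLOCKS
```

On small Flash parts this reserve is a significant fraction of the media, which is why the text counts it as a major flaw of both file systems.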