On Mon, Nov 25, 2013 at 07:30:33PM +0800, Hans Zhang wrote:
> On 2013/11/25 18:23, Richard Genoud wrote:
> >
> > Well, yes, writing through the char device would be a solution.
> >> But, *why* are you writing through mtdblock instead?
> >>
> >>> I think mtdblock may be an optional approach in case we do not have
> >>> the mtd-utils in our environment; it provides a simpler way to write
> >>> the NAND.
> >>>
> >> Uh? simpler? Writing through mtdchar is as simple as it gets:
> >>
> >>   $ cat some_file.img > /dev/mtd0
> >>
> >> Sorry, but I'm still confused about what you are trying to accomplish.
> > I think that what Hans wants to do is:
> >  $ cat some_file.img > /dev/mtd0
> > And have it not fail on a bad block but jump over it.
> > ... Which is a bad idea.
> > But, like you, I haven't figured out why mtdblock instead of mtdchar.
> >
> >
> 
> I'm sorry, it's my mistake: I thought the NAND needs to be erased
> explicitly in userspace before writing when going through the mtdchar
> device. That's why I used mtdblock instead of mtdchar.
> 

Your understanding is correct: NAND *must* be erased explicitly in userspace
before writing. However, keep in mind the following additional constraints:

* Writing should always be performed using 'nandwrite', not tools such
  as 'cat' or 'dd': only 'nandwrite' knows how to skip bad blocks (see
  the example below the list).

* An mtdblock device shouldn't be used to access the NAND directly from
  userspace. AFAICS, the primary use of mtdblock is to be able to
  mount JFFS2.

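For example, a typical raw NAND write with mtd-utils looks like this
(the device node and image name below are just placeholders, adjust
them for your setup):

  $ flash_erase /dev/mtd0 0 0
  $ nandwrite -p /dev/mtd0 some_file.img

'flash_erase <dev> 0 0' erases the whole partition, and 'nandwrite -p'
pads the image to the NAND page size; both skip bad blocks instead of
failing on them.
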
Out of curiosity, what's your NAND layout? What FS are you using?
Unless you have some special requirement, you should be using UBI to
access the device (and not MTD).

Just a suggestion...
-- 
Ezequiel García, Free Electrons
Embedded Linux, Kernel and Android Engineering
http://free-electrons.com