Re: Getting rid of alignment faults in userspace

2011-06-18 Thread Arnd Bergmann
On Friday 17 June 2011, Nicolas Pitre wrote:
> On Fri, 17 Jun 2011, Arnd Bergmann wrote:
> > On Friday 17 June 2011 14:10:11 Dave Martin wrote:
> > 
> > > As part of the general effort to make open source on ARM better, I think 
> > > it would be great if we can disable the alignment fixups (or at least
> > > enable logging) and work with upstreams to get the affected packages
> > > fixed.

> The only effective rate limiting configuration I would recommend is to 
> SIGBUS misaligned accesses by default.  And that's also supported 
> already with the right flag.
> 

So should we change the default in the prerelease kernels to enable SIGBUS?
The immediate result of that would be to break firefox, which would
cause a lot of questions on the mailing list.

Arnd



Re: Getting rid of alignment faults in userspace

2011-06-18 Thread Paul Brook
> > >   char buf[8];
> > >   void *v = &buf[1];
> > >   unsigned int *p = (unsigned int *)v;
> > 
> > This does not (reliably) do what you expect.  The compiler need not align
> > buf.
> 
> Printing the value of p should clarify this.
> 
> And, as we can see above, the "simple" accesses are left to the hardware
> to fix up.  However, if the misaligned access is performed using a
> 64-bit value pointer, then the kernel will trap an exception and the
> access will be simulated.

I think you've missed my point.  gcc may (though it is unlikely in this case)
choose to place buf at an odd address, in which case p may happen to be
properly aligned.

I'm not sure where you get "64-bit value pointer" from.  *p is only a
word-sized access, and memcpy is defined in terms of bytes, so it will only
be promoted to wider accesses when the compiler believes it is safe.
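
As an aside, the usual portable fix for code like the snippet quoted above
is a memcpy-based load, which lets the compiler pick accesses that are safe
for whatever alignment the pointer really has.  A minimal sketch:

#include <string.h>

/* Read a 32-bit value from a possibly misaligned address. */
static unsigned int load_u32(const void *src)
{
	unsigned int v;
	memcpy(&v, src, sizeof v);
	return v;
}

Where the compiler can prove the pointer is aligned this still compiles to
a single load; otherwise it falls back to byte accesses.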

Paul



Re: LAVA Dashboard user forum

2011-06-18 Thread Zygmunt Krynicki

On 17.06.2011 23:50, Zach Pfeffer wrote:
> On 17 June 2011 11:45, Zygmunt Krynicki  wrote:
>> On 17.06.2011 18:24, Zach Pfeffer wrote:
>>>
>>> One thing I'd like to see ASAP with LAVA is to list the build that was
>>> tested on the test results page.
>>
>> It is listed, unless I'm mistaken:
>>
>> http://validation.linaro.org/launch-control/dashboard/test-results/399109/
>>
>> See the "Android.URL" below.
>
> Hey cool. Now if only I could search and see all the test runs.

I can make a report that will list them all for you with links to test
results. Let's do that on Monday.

>> Ideally each build would trigger a test to be scheduled, results of each
>> test should be uploaded to the dashboard. This way we could retain identity
>> in all participating components.
>
> Yeah. Perhaps the android-build should just kick the test off. Is
> there a simple API to request this? Something like:
>
> test(build_url, target, type)

Michael is working on the scheduler. He recently added an API for
submitting a job to the system. It's still some time before this can be
used in production though. AFAIR there is no code that would take jobs
and actually ask dispatchers to run them.

> ...and if the target is busy, we just skip the test and provide a link
> on the android-build site that lets you re-request? I'll file a bp for
> this.

The scheduler will queue jobs. Once a dispatcher is idle it will simply
run another test.


Thanks
ZK



Re: LAVA Dashboard user forum

2011-06-18 Thread Zach Pfeffer
On 18 June 2011 04:07, Zygmunt Krynicki  wrote:
> On 17.06.2011 23:50, Zach Pfeffer wrote:
>>
>> On 17 June 2011 11:45, Zygmunt Krynicki
>>  wrote:
>>>
>>> On 17.06.2011 18:24, Zach Pfeffer wrote:
>>>>
>>>> One thing I'd like to see ASAP with LAVA is to list the build that was
>>>> tested on the test results page.
>>>
>>> It is listed, unless I'm mistaken:
>>>
>>>
>>> http://validation.linaro.org/launch-control/dashboard/test-results/399109/
>>>
>>> See the "Android.URL" below.
>>
>> Hey cool. Now if only I could search and see all the test runs.
>
> I can make a report that will list them all for you with links to test
> results. Let's do that on Monday.

Cool. Thanks.

>>> Ideally each build would trigger a test to be scheduled, results of each
>>> test should be uploaded to the dashboard. This way we could retain
>>> identity
>>> in all participating components.
>>
>> Yeah. Perhaps the android-build should just kick the test off. Is
>> there a simple API to request this? Something like:
>>
>> test(build_url, target, type)
>
> Michael is working on the scheduler. He recently added an API for submitting a
> job to the system. It's still some time before this can be used in
> production though. AFAIR there is no code that would take jobs and actually
> ask dispatchers to run them.

Since we've got such low usage at the moment, is there any way we can
start to use it without a full scheduler in place? For instance, Paul
had mentioned that we could email him and he would kick off jobs.
Perhaps we could just agree on times that android-build could request
tests? Do a manual schedule?

>> ...and if the target is busy, we just skip the test and provide a link
>> on the android-build site that lets you re-request? I'll file a bp for
>> this.
>
> The scheduler will queue jobs. Once a dispatcher is idle it will simply run
> another test.



Re: Getting rid of alignment faults in userspace

2011-06-18 Thread Steve Langasek
On Fri, Jun 17, 2011 at 01:10:11PM +0100, Dave Martin wrote:

> For ARM, we can achieve the goal by augmenting the default kernel command-
> line options: either

> alignment=3
> Fix up each alignment fault, but also log the faulting address
> and name of the offending process to dmesg.

> alignment=5
> Pass each alignment fault to the user process as SIGBUS (fatal
> by default) and log the faulting address and name of the
> offending process to dmesg.

> Fault statistics can also be obtained at runtime by reading
> /proc/cpu/alignment.

> For other architectures, there may be other arch-specific ways of
> achieving something similar.

Other architectures[1] use the 'prctl' tool, which uses the
prctl(PR_SET_UNALIGN,...) kernel interface to control the unaligned trap
behavior for the process.  If this can sanely be toggled on ARM at
runtime, it would be keen to use the same interface on this arch.
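
For reference, this is how it looks on the architectures that already
support it (a minimal sketch; ARM does not wire up PR_SET_UNALIGN today,
which is exactly the question here):

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
	/* Make unaligned accesses raise SIGBUS for this process only. */
	if (prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS, 0, 0, 0) < 0)
		perror("PR_SET_UNALIGN");

	/* ... run the code under test ... */
	return 0;
}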

HTH,
-- 
Steve Langasek   Give me a lever long enough and a Free OS
Debian Developer   to set it on, and I can move the world.
Ubuntu Developer   http://www.debian.org/
slanga...@ubuntu.com vor...@debian.org

[1] Originally ia64; historically ported to hppa and alpha; currently
available in Debian unstable for ia64 and powerpc




Re: Getting rid of alignment faults in userspace

2011-06-18 Thread Andy Green

On 06/17/2011 11:53 PM, Somebody in the thread at some point said:

Hi -


int main(int argc, char * argv[])
{

   char buf[8];
   void *v = &buf[1];
   unsigned int *p = (unsigned int *)v;


This does not (reliably) do what you expect.  The compiler need not align buf.


What?  Somebody complaining my code does not blow enough faults and 
exceptions? ^^


If I retry the same test with this, which is definitely proof against 
such doubts -->



#include <stdio.h>
#include <string.h>

int main(int argc, char * argv[])
{
 char buf[8];
 void *v = &buf[1];
 void *v1 = &buf[2];
 unsigned int *p = (unsigned int *)v;
 unsigned int *p1 = (unsigned int *)v1;

 strcpy(buf, "abcdefg");

 printf("0x%08x\n", *p);
 printf("0x%08x\n", *p1);

 return 0;
}

I get

root@linaro:~# echo 2 > /proc/cpu/alignment
root@linaro:~# ./a.out
0x65646362
0x66656463
root@linaro:~# echo 0 > /proc/cpu/alignment
root@linaro:~# ./a.out
0x65646362
0x66656463

ie, it is still always fixed up.

Let's not lose sight of the point of the thread: Dave Martin wants to 
root out remaining alignment faults in userland, which is a great idea.  I 
was warning him that, depending on what he tests on (eg, Panda), by default 
he won't see any alignment faults reach the soft fixup code in the first 
place, which is what would let him get a signal and find the bad code in gdb. 
 And this code does prove that to be the case.


-Andy



Re: Getting rid of alignment faults in userspace

2011-06-18 Thread Nicolas Pitre
On Sat, 18 Jun 2011, Paul Brook wrote:

> > > >   char buf[8];
> > > >   void *v = &buf[1];
> > > >   unsigned int *p = (unsigned int *)v;
> > > 
> > > This does not (reliably) do what you expect.  The compiler need not align
> > > buf.
> > 
> > Printing the value of p should clarify this.
> > 
> > And, as we can see above, the "simple" accesses are left to the hardware
> > to fix up.  However, if the misaligned access is performed using a
> > 64-bit value pointer, then the kernel will trap an exception and the
> > access will be simulated.
> 
> I think you've missed my point.  gcc may (though it is unlikely in this
> case) choose to place buf at an odd address, in which case p may happen
> to be properly aligned.

Sorry for being too vague.

My point is to print the value of p, i.e. the actual address used to 
perform the access, which would confirm whether the access is truly 
misaligned or not.  That won't force any particular alignment on the 
buffer, obviously, but at least it would clear any doubt as to the 
validity of the test.

> I'm not sure where you get "64-bit value pointer" from.  *p is only a
> word-sized access, and memcpy is defined in terms of bytes, so it will
> only be promoted to wider accesses when the compiler believes it is safe.

Again I probably was too vague.  So let me provide the actual code 
modified from Andy's expressing what I mean:

#include <stdio.h>
#include <string.h>

int main(int argc, char * argv[])
{
 char buf[8];
 void *v = &buf[1];
 unsigned int *p = (unsigned int *)v;

 strcpy(buf, "abcdefg");

 printf("*%p = 0x%08x\n", p, *p);

 return 0;
}

That's the original, modified to print the actual address used, which 
should confirm there is actually a misaligned access performed.  And in 
this case, confirmed by the kernel code I quoted previously, the 
hardware will perform the misaligned access by itself on ARMv6 and 
above.

Now, if we use this code instead:

#include <stdio.h>
#include <string.h>

int main(int argc, char * argv[])
{
 char buf[8];
 void *v = &buf[1];
 unsigned long long *p = (unsigned long long *)v;

 strcpy(buf, "abcdefg");

 printf("*%p = 0x%016llx\n", p, *p);

 return 0;
}

In this case the kernel alignment trap will be involved, and the stats 
in /proc/cpu/alignment will increase, as the hardware won't perform the 
access automatically here.

In both cases the result would be what people expect, although the 
second case will be far more expensive.


Nicolas



Re: Getting rid of alignment faults in userspace

2011-06-18 Thread Nicolas Pitre
On Sat, 18 Jun 2011, Arnd Bergmann wrote:

> On Friday 17 June 2011, Nicolas Pitre wrote:
> > On Fri, 17 Jun 2011, Arnd Bergmann wrote:
> > > On Friday 17 June 2011 14:10:11 Dave Martin wrote:
> > > 
> > > > As part of the general effort to make open source on ARM better, I 
> > > > think 
> > > > it would be great if we can disable the alignment fixups (or at least
> > > > enable logging) and work with upstreams to get the affected packages
> > > > fixed.
> 
> > The only effective rate limiting configuration I would recommend is to 
> > SIGBUS misaligned accesses by default.  And that's also supported 
> > already with the right flag.
> > 
> 
> So should we change the default in the prerelease kernels to enable SIGBUS?
> The immediate result of that would be to break firefox, which would
> cause a lot of questions on the mailing list.

Only if we really plan on fixing Firefox, and upstream is 
interested in accepting the fix.  Otherwise there is no point, 
especially when it is possible for those actually interested in this 
issue to change the misaligned access behavior at run time for 
themselves.
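
(For instance, anyone who wants the SIGBUS behavior today can already
select it at run time, without a kernel change, with
echo 5 > /proc/cpu/alignment, per the alignment=5 semantics Dave listed
earlier in the thread.)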


Nicolas



Re: Getting rid of alignment faults in userspace

2011-06-18 Thread Rtp
Dave Martin  writes:
Hi,

> Hi all,
>
> I've recently become aware that a few packages are causing alignment
> faults on ARM, and are relying on the alignment fixup emulation code in
> the kernel in order to work.
>
> Such faults are very expensive in terms of CPU cycles, and can generally
> only result from wrong code (for example, C/C++ code which violates the
> relevant language standards, assembler which makes invalid assumptions,
> or functions called with misaligned pointers due to other bugs).
>
> Currently, on a natty Ubuntu desktop image I observe no faults except
> from firefox and mono-based apps (see below).
>
> As part of the general effort to make open source on ARM better, I think 
> it would be great if we can disable the alignment fixups (or at least
> enable logging) and work with upstreams to get the affected packages
> fixed.
>
> For release images we might want to be more forgiving, but for development
> we have the option of being more aggressive.
>
> The number of affected packages and bugs appears small enough for the
> fixing effort to be feasible, without temporarily breaking whole
> distros.
>
>
> For ARM, we can achieve the goal by augmenting the default kernel command-
> line options: either
>
> alignment=3
> Fix up each alignment fault, but also log the faulting address
> and name of the offending process to dmesg.
>
> alignment=5
> Pass each alignment fault to the user process as SIGBUS (fatal
> by default) and log the faulting address and name of the
> offending process to dmesg.

iirc, someone sent a patch some months/years ago to change the default,
but it was rejected because there are (were?) some libcs, including
glibc, doing unaligned accesses [1], and this can happen early in the
boot process. In that kind of case, getting a SIGBUS would hurt.

Also, as noted by someone else in the thread, you do want to test on
something like armv5* or v4*, because there is a high chance that the
trap used by the alignment fixup won't be triggered at all on >= armv6.

Arnaud

[1] See commit log of commit d944d549aa86e08cba080396513234cf048fee1f.



Re: Getting rid of alignment faults in userspace

2011-06-18 Thread Nicolas Pitre
On Sat, 18 Jun 2011, Nicolas Pitre wrote:

> int main(int argc, char * argv[])
> {
>  char buf[8];
>  void *v = &buf[1];
>  unsigned int *p = (unsigned int *)v;
> 
>  strcpy(buf, "abcdefg");
> 
>  printf("*%p = 0x%08x\n", p, *p);
> 
>  return 0;
> }

Obviously, there is a buffer overflow here in the 64-bit variant: reading 
8 bytes from &buf[1] goes one byte past the end of buf[8], so the buf 
array should be enlarged.


Nicolas



Re: Getting rid of alignment faults in userspace

2011-06-18 Thread Nicolas Pitre
On Sat, 18 Jun 2011, Arnaud Patard wrote:

> Dave Martin  writes:
> Hi,
> 
> > Hi all,
> >
> > I've recently become aware that a few packages are causing alignment
> > faults on ARM, and are relying on the alignment fixup emulation code in
> > the kernel in order to work.
> >
> > Such faults are very expensive in terms of CPU cycles, and can generally
> > only result from wrong code (for example, C/C++ code which violates the
> > relevant language standards, assembler which makes invalid assumptions,
> > or functions called with misaligned pointers due to other bugs).
> >
> > Currently, on a natty Ubuntu desktop image I observe no faults except
> > from firefox and mono-based apps (see below).
> >
> > As part of the general effort to make open source on ARM better, I think 
> > it would be great if we can disable the alignment fixups (or at least
> > enable logging) and work with upstreams to get the affected packages
> > fixed.
> >
> > For release images we might want to be more forgiving, but for development
> > we have the option of being more aggressive.
> >
> > The number of affected packages and bugs appears small enough for the
> > fixing effort to be feasible, without temporarily breaking whole
> > distros.
> >
> >
> > For ARM, we can achieve the goal by augmenting the default kernel command-
> > line options: either
> >
> > alignment=3
> > Fix up each alignment fault, but also log the faulting address
> > and name of the offending process to dmesg.
> >
> > alignment=5
> > Pass each alignment fault to the user process as SIGBUS (fatal
> > by default) and log the faulting address and name of the
> > offending process to dmesg.
> 
> iirc, someone sent some months/years ago a patch to change the default

That was me.

> but it has been rejected because there are (was ?) some libc including
> glibc doing some unaligned access [1], and this can happen early in the
> boot process. In this kind of case, things like getting a sigbus would
> hurt.

This is only partly true.

Rewind about 15 years ago when all that Linux supported was ARMv3.  On 
ARMv3 there is no instruction for doing half-word loads/stores, and no 
instruction to sign extend a loaded byte.

In those days, the compiler was relying on a documented and 
architecturally defined behavior of misaligned loads/stores which is to 
rotate the bytes comprising the otherwise aligned word, the rotation 
position being defined by the sub-word offset.  Doing so allowed for 
certain optimizations to avoid extra shifts and masks.
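
For illustration, here is a minimal user-space sketch of what that
architecturally defined rotation does to a misaligned load (not code from
this thread; assumes a little-endian host):

#include <stdio.h>
#include <string.h>

/* What a pre-ARMv6 LDR from a misaligned address returns: the word at
 * the rounded-down aligned address, rotated right by 8 bits for every
 * byte of misalignment. */
static unsigned int rotated_ldr(const unsigned char *mem, unsigned int addr)
{
	unsigned int word, rot = 8 * (addr & 3);

	memcpy(&word, mem + (addr & ~3u), 4);
	return rot ? (word >> rot) | (word << (32 - rot)) : word;
}

int main(void)
{
	unsigned char mem[8] = "abcdefg";

	/* Prints 0x61646362 on a little-endian host; a fixed-up (or
	 * ARMv6+ hardware) misaligned load gives 0x65646362 instead,
	 * as seen earlier in the thread. */
	printf("0x%08x\n", rotated_ldr(mem, 1));
	return 0;
}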

Then a bunch of binaries were built with a version of GCC making use of 
those misaligned access tricks.

Then came along ARMv4 with its LDRH, LDRSH, and LDRSB instructions, 
making those misaligned tricks unnecessary.  Hence GCC deprecated those 
optimizations.  Today only the old farts amongst us still remember about 
this.

So for quite a while now, having a misaligned access on ARM before ARMv6 
is quite likely to not produce the commonly expected result.  That's why 
there is code in the kernel to trap and fix up misaligned accesses.  
However, it is turned off by default for user space.  Why?

Turns out that a prominent ARM developer still has binaries from the 
ARMv3 era around, and the default of not fixing up misaligned user space 
accesses is for remaining compatible with them.

So if you do have a version of glibc that is not from 15 years ago (that 
would have to be a.out and not ELF if it was) then you do not want to 
let misaligned accesses go through unfixed, otherwise you'll simply have 
latent data corruption somewhere.

> Also, as noted by someone else in the thread, you do want to test on
> something like armv5* or v4* because there are high chances than the
> trap used by the alignment fix won't be triggered at all on >= armv6.

Given that Linaro is working only with Thumb2-compiled user space, that 
implies ARMv6 and above only.

> [1] See commit log of commit d944d549aa86e08cba080396513234cf048fee1f.

And note the "if not fixed up, results in segfaults" in that log, 
meaning that the current default is wrong for that case.


Nicolas



[PATCH v5 00/12] mmc: use nonblock mmc requests to minimize latency

2011-06-18 Thread Per Forlin
How significant is the cache maintenance overhead?
It depends: eMMC devices are much faster now
compared to a few years ago, while cache maintenance costs more due to
multiple cache levels and speculative cache pre-fetch. In relative terms
the cost of handling the caches has increased, and it is now a bottleneck
when dealing with fast eMMC together with DMA.

The intention of introducing non-blocking mmc requests is to minimize the
time between one mmc request ending and another mmc request starting. In the
current implementation the MMC controller is idle while dma_map_sg and
dma_unmap_sg are processing. Introducing non-blocking mmc requests makes it
possible to prepare the caches for the next job in parallel with an active
mmc request.

This is done by making issue_rw_rq() non-blocking.
The increase in throughput is proportional to the time it takes to
prepare a request (the major part of which is dma_map_sg and dma_unmap_sg)
and to how fast the memory is. The faster the MMC/SD is,
the more significant the prepare time becomes. Measurements on U5500
and Panda, on eMMC and SD, show a significant performance gain for large
reads when running in DMA mode. In the PIO case the performance is unchanged.

There are two optional hooks, pre_req() and post_req(), that the host driver
may implement in order to move work to before and after the actual mmc_request
function is called. In the DMA case pre_req() may do dma_map_sg() and prepare
the dma descriptor, and post_req() runs dma_unmap_sg().
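
To make the flow concrete, a rough sketch of the resulting core-side
pipeline (mmc_start_req() is from patch 1 of this series; fetch_next() and
finish_req() are hypothetical stand-ins for the block-layer glue):

	struct mmc_async_req *next, *done;
	int err;

	/* While request N is active on the bus, request N+1 is prepared:
	 * mmc_start_req() calls pre_req() on the new request, waits for
	 * the previous one to finish, then submits the new one. */
	while ((next = fetch_next(queue))) {		/* hypothetical */
		done = mmc_start_req(host, next, &err);
		if (done)
			finish_req(queue, done, err);	/* hypothetical */
	}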

Details on measurements from IOZone and mmc_test:
https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req

Changes since v4:
 * rebase on top of linux 3.0

Per Forlin (12):
   mmc: add non-blocking mmc request function
  omap_hsmmc: use original sg_len for dma_unmap_sg
  omap_hsmmc: add support for pre_req and post_req
  mmci: implement pre_req() and post_req()
  mmc: mmc_test: add debugfs file to list all tests
   mmc: mmc_test: add test for non-blocking transfers
  mmc: add member in mmc queue struct to hold request data
  mmc: add a block request prepare function
  mmc: move error code in mmc_block_issue_rw_rq to a separate function.
  mmc: add a second mmc queue request member
  mmc: test: add random fault injection in core.c
  mmc: add handling for two parallel block requests in issue_rw_rq

 drivers/mmc/card/block.c  |  534 -
 drivers/mmc/card/mmc_test.c   |  361 +++-
 drivers/mmc/card/queue.c  |  184 +-
 drivers/mmc/card/queue.h  |   33 ++-
 drivers/mmc/core/core.c   |  165 -
 drivers/mmc/core/debugfs.c|5 +
 drivers/mmc/host/mmci.c   |  146 ++-
 drivers/mmc/host/mmci.h   |8 +
 drivers/mmc/host/omap_hsmmc.c |   90 +++-
 include/linux/mmc/core.h  |6 +-
 include/linux/mmc/host.h  |   24 ++
 lib/Kconfig.debug |   11 +
 12 files changed, 1237 insertions(+), 330 deletions(-)

-- 
1.7.4.1




[PATCH v5 01/12] mmc: add non-blocking mmc request function

2011-06-18 Thread Per Forlin
Previously there has only been one function, mmc_wait_for_req(),
to start and wait for a request. This patch adds
 * mmc_start_req() - starts a request without waiting.
   If there is an ongoing request, wait for completion
   of that request, then start the new one and return.
   Does not wait for the new command to complete.

This patch also adds new function members in struct mmc_host_ops
only called from core.c
 * pre_req - asks the host driver to prepare for the next job
 * post_req - asks the host driver to clean up after a completed job

The intention is to use pre_req() and post_req() to do cache maintenance
while a request is active. pre_req() can be called while a request is active
to minimize the latency before the next job starts. post_req() can be used
after the next job has started, to clean up the previous request. This
minimizes the host driver's request-end latency. post_req() is typically
used before ending the block request and handing the buffer over to the
block layer.

Add a host-private member in mmc_data to be used by
pre_req to mark the data. The host driver will then
check this mark to see if the data is prepared or not.
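
For illustration, a minimal sketch of the host-driver side of this contract
(my_map()/my_unmap() are hypothetical stand-ins for e.g. dma_map_sg()/
dma_unmap_sg(); the real implementations follow in patches 3 and 4):

static void my_pre_req(struct mmc_host *host, struct mmc_request *mrq,
		       bool is_first_req)
{
	/* Prepare the data and mark it via the host-private cookie. */
	mrq->data->host_cookie = my_map(host, mrq->data) ? 0 : 1;
}

static void my_post_req(struct mmc_host *host, struct mmc_request *mrq,
			int err)
{
	/* Clean up; err is non-zero when pre_req's work is being unwound. */
	my_unmap(host, mrq->data, err);
	mrq->data->host_cookie = 0;
}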

Signed-off-by: Per Forlin 
---
 drivers/mmc/core/core.c  |  111 +
 include/linux/mmc/core.h |6 ++-
 include/linux/mmc/host.h |   21 +
 3 files changed, 127 insertions(+), 11 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 68091dd..3538166 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -198,10 +198,108 @@ mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
 
 static void mmc_wait_done(struct mmc_request *mrq)
 {
-   complete(mrq->done_data);
+   complete(&mrq->completion);
+}
+
+static void __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
+{
+   init_completion(&mrq->completion);
+   mrq->done = mmc_wait_done;
+   mmc_start_request(host, mrq);
+}
+
+static void mmc_wait_for_req_done(struct mmc_host *host,
+ struct mmc_request *mrq)
+{
+   wait_for_completion(&mrq->completion);
+}
+
+/**
+ * mmc_pre_req - Prepare for a new request
+ * @host: MMC host to prepare command
+ * @mrq: MMC request to prepare for
+ * @is_first_req: true if there is no previous started request
+ * that may run in parallel to this call, otherwise false
+ *
+ * mmc_pre_req() is called prior to mmc_start_req() to let
+ * host prepare for the new request. Preparation of a request may be
+ * performed while another request is running on the host.
+ */
+static void mmc_pre_req(struct mmc_host *host, struct mmc_request *mrq,
+bool is_first_req)
+{
+   if (host->ops->pre_req)
+   host->ops->pre_req(host, mrq, is_first_req);
 }
 
 /**
+ * mmc_post_req - Post process a completed request
+ * @host: MMC host to post process command
+ * @mrq: MMC request to post process for
+ * @err: error; if non-zero, clean up any resources made in pre_req
+ *
+ * Let the host post process a completed request. Post processing of
+ * a request may be performed while another request is running.
+ */
+static void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq,
+int err)
+{
+   if (host->ops->post_req)
+   host->ops->post_req(host, mrq, err);
+}
+
+/**
+ * mmc_start_req - start a non-blocking request
+ * @host: MMC host to start command
+ * @areq: async request to start
+ * @error: non-zero in case of error
+ *
+ * Start a new MMC custom command request for a host.
+ * If there is an ongoing async request, wait for completion
+ * of that request and start the new one and return.
+ * Does not wait for the new request to complete.
+ *
+ * Returns the completed async request, or NULL if none completed.
+ */
+struct mmc_async_req *mmc_start_req(struct mmc_host *host,
+   struct mmc_async_req *areq, int *error)
+{
+   int err = 0;
+   struct mmc_async_req *data = host->areq;
+
+   /* Prepare a new request */
+   if (areq)
+   mmc_pre_req(host, areq->mrq, !host->areq);
+
+   if (host->areq) {
+   mmc_wait_for_req_done(host, host->areq->mrq);
+   err = host->areq->err_check(host->card, host->areq);
+   if (err) {
+   mmc_post_req(host, host->areq->mrq, 0);
+   if (areq)
+   mmc_post_req(host, areq->mrq, -EINVAL);
+
+   host->areq = NULL;
+   if (error)
+   *error = err;
+   return data;
+   }
+   }
+
+   if (areq)
+   __mmc_start_req(host, areq->mrq);
+
+   if (host->areq)
+   mmc_post_req(host, host->areq->mrq, 0);
+
+   host->areq = areq;
+   if (error)

[PATCH v5 02/12] omap_hsmmc: use original sg_len for dma_unmap_sg

2011-06-18 Thread Per Forlin
Don't use the returned sg_len from dma_map_sg() as an input parameter
to dma_unmap_sg(). Use the original sg_len for both dma_map_sg()
and dma_unmap_sg(), according to the documentation in DMA-API.txt.

Signed-off-by: Per Forlin 
Reviewed-by: Venkatraman S 
---
 drivers/mmc/host/omap_hsmmc.c |5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
index 5b2e215..abea10f 100644
--- a/drivers/mmc/host/omap_hsmmc.c
+++ b/drivers/mmc/host/omap_hsmmc.c
@@ -962,7 +962,8 @@ static void omap_hsmmc_dma_cleanup(struct omap_hsmmc_host *host, int errno)
spin_unlock(&host->irq_lock);
 
if (host->use_dma && dma_ch != -1) {
-   dma_unmap_sg(mmc_dev(host->mmc), host->data->sg, host->dma_len,
+   dma_unmap_sg(mmc_dev(host->mmc), host->data->sg,
+   host->data->sg_len,
omap_hsmmc_get_dma_dir(host, host->data));
omap_free_dma(dma_ch);
}
@@ -1346,7 +1347,7 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data)
return;
}
 
-   dma_unmap_sg(mmc_dev(host->mmc), data->sg, host->dma_len,
+   dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
omap_hsmmc_get_dma_dir(host, data));
 
req_in_progress = host->req_in_progress;
-- 
1.7.4.1




[PATCH v5 03/12] omap_hsmmc: add support for pre_req and post_req

2011-06-18 Thread Per Forlin
pre_req() runs dma_map_sg(); post_req() runs dma_unmap_sg().
If pre_req() is not called before omap_hsmmc_request(),
dma_map_sg will be issued before starting the transfer.
It is optional to use pre_req(). If pre_req() is issued,
post_req() must be called as well.

Signed-off-by: Per Forlin 
---
 drivers/mmc/host/omap_hsmmc.c |   87 +++--
 1 files changed, 83 insertions(+), 4 deletions(-)

diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
index abea10f..b593301 100644
--- a/drivers/mmc/host/omap_hsmmc.c
+++ b/drivers/mmc/host/omap_hsmmc.c
@@ -141,6 +141,11 @@
 #define OMAP_HSMMC_WRITE(base, reg, val) \
__raw_writel((val), (base) + OMAP_HSMMC_##reg)
 
+struct omap_hsmmc_next {
+   unsigned intdma_len;
+   s32 cookie;
+};
+
 struct omap_hsmmc_host {
struct  device  *dev;
struct  mmc_host*mmc;
@@ -184,6 +189,7 @@ struct omap_hsmmc_host {
int reqs_blocked;
int use_reg;
int req_in_progress;
+   struct omap_hsmmc_next  next_data;
 
struct  omap_mmc_platform_data  *pdata;
 };
@@ -1347,8 +1353,9 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data)
return;
}
 
-   dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
-   omap_hsmmc_get_dma_dir(host, data));
+   if (!data->host_cookie)
+   dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
+omap_hsmmc_get_dma_dir(host, data));
 
req_in_progress = host->req_in_progress;
dma_ch = host->dma_ch;
@@ -1366,6 +1373,45 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data)
}
 }
 
+static int omap_hsmmc_pre_dma_transfer(struct omap_hsmmc_host *host,
+  struct mmc_data *data,
+  struct omap_hsmmc_next *next)
+{
+   int dma_len;
+
+   if (!next && data->host_cookie &&
+   data->host_cookie != host->next_data.cookie) {
+   printk(KERN_WARNING "[%s] invalid cookie: data->host_cookie %d"
+  " host->next_data.cookie %d\n",
+  __func__, data->host_cookie, host->next_data.cookie);
+   data->host_cookie = 0;
+   }
+
+   /* Check if next job is already prepared */
+   if (next ||
+   (!next && data->host_cookie != host->next_data.cookie)) {
+   dma_len = dma_map_sg(mmc_dev(host->mmc), data->sg,
+data->sg_len,
+omap_hsmmc_get_dma_dir(host, data));
+
+   } else {
+   dma_len = host->next_data.dma_len;
+   host->next_data.dma_len = 0;
+   }
+
+
+   if (dma_len == 0)
+   return -EINVAL;
+
+   if (next) {
+   next->dma_len = dma_len;
+   data->host_cookie = ++next->cookie < 0 ? 1 : next->cookie;
+   } else
+   host->dma_len = dma_len;
+
+   return 0;
+}
+
 /*
  * Routine to configure and start DMA for the MMC card
  */
@@ -1399,9 +1445,10 @@ static int omap_hsmmc_start_dma_transfer(struct omap_hsmmc_host *host,
mmc_hostname(host->mmc), ret);
return ret;
}
+   ret = omap_hsmmc_pre_dma_transfer(host, data, NULL);
+   if (ret)
+   return ret;
 
-   host->dma_len = dma_map_sg(mmc_dev(host->mmc), data->sg,
-   data->sg_len, omap_hsmmc_get_dma_dir(host, data));
host->dma_ch = dma_ch;
host->dma_sg_idx = 0;
 
@@ -1481,6 +1528,35 @@ omap_hsmmc_prepare_data(struct omap_hsmmc_host *host, struct mmc_request *req)
return 0;
 }
 
+static void omap_hsmmc_post_req(struct mmc_host *mmc, struct mmc_request *mrq,
+   int err)
+{
+   struct omap_hsmmc_host *host = mmc_priv(mmc);
+   struct mmc_data *data = mrq->data;
+
+   if (host->use_dma) {
+   dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
+omap_hsmmc_get_dma_dir(host, data));
+   data->host_cookie = 0;
+   }
+}
+
+static void omap_hsmmc_pre_req(struct mmc_host *mmc, struct mmc_request *mrq,
+  bool is_first_req)
+{
+   struct omap_hsmmc_host *host = mmc_priv(mmc);
+
+   if (mrq->data->host_cookie) {
+   mrq->data->host_cookie = 0;
+   return ;
+   }
+
+   if (host->use_dma)
+   if (omap_hsmmc_pre_dma_transfer(host, mrq->data,
+   &host->next_data))
+   mrq->data->host_cookie = 0;
+}
+
 /*
  * Request function. for read/write operation
  */
@@ -1929,6 +2005,8 @@ static int omap_hsmmc_disable_fclk(struct mmc_host *mmc, int lazy)
 static const struct mmc_host_ops omap_hsmmc_ops = {
 

[PATCH v5 04/12] mmci: implement pre_req() and post_req()

2011-06-18 Thread Per Forlin
pre_req() runs dma_map_sg() and prepares the dma descriptor
for the next mmc data transfer. post_req() runs dma_unmap_sg().
If pre_req() is not called before mmci_request(), mmci_request()
will prepare the cache and dma just like it did before.
It is optional to use pre_req() and post_req() for mmci.

Signed-off-by: Per Forlin 
---
 drivers/mmc/host/mmci.c |  146 ++
 drivers/mmc/host/mmci.h |8 +++
 2 files changed, 141 insertions(+), 13 deletions(-)

diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
index 7721de9..8b21467 100644
--- a/drivers/mmc/host/mmci.c
+++ b/drivers/mmc/host/mmci.c
@@ -335,7 +335,8 @@ static void mmci_dma_unmap(struct mmci_host *host, struct mmc_data *data)
dir = DMA_FROM_DEVICE;
}
 
-   dma_unmap_sg(chan->device->dev, data->sg, data->sg_len, dir);
+   if (!data->host_cookie)
+   dma_unmap_sg(chan->device->dev, data->sg, data->sg_len, dir);
 
/*
 * Use of DMA with scatter-gather is impossible.
@@ -353,7 +354,8 @@ static void mmci_dma_data_error(struct mmci_host *host)
dmaengine_terminate_all(host->dma_current);
 }
 
-static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
+static int mmci_dma_prep_data(struct mmci_host *host, struct mmc_data *data,
+ struct mmci_host_next *next)
 {
struct variant_data *variant = host->variant;
struct dma_slave_config conf = {
@@ -364,13 +366,20 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
.src_maxburst = variant->fifohalfsize >> 2, /* # of words */
.dst_maxburst = variant->fifohalfsize >> 2, /* # of words */
};
-   struct mmc_data *data = host->data;
struct dma_chan *chan;
struct dma_device *device;
struct dma_async_tx_descriptor *desc;
int nr_sg;
 
-   host->dma_current = NULL;
+   /* Check if next job is already prepared */
+   if (data->host_cookie && !next &&
+   host->dma_current && host->dma_desc_current)
+   return 0;
+
+   if (!next) {
+   host->dma_current = NULL;
+   host->dma_desc_current = NULL;
+   }
 
if (data->flags & MMC_DATA_READ) {
conf.direction = DMA_FROM_DEVICE;
@@ -385,7 +394,7 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
return -EINVAL;
 
/* If less than or equal to the fifo size, don't bother with DMA */
-   if (host->size <= variant->fifosize)
+   if (data->blksz * data->blocks <= variant->fifosize)
return -EINVAL;
 
device = chan->device;
@@ -399,14 +408,38 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
if (!desc)
goto unmap_exit;
 
-   /* Okay, go for it. */
-   host->dma_current = chan;
+   if (next) {
+   next->dma_chan = chan;
+   next->dma_desc = desc;
+   } else {
+   host->dma_current = chan;
+   host->dma_desc_current = desc;
+   }
+
+   return 0;
 
+ unmap_exit:
+   if (!next)
+   dmaengine_terminate_all(chan);
+   dma_unmap_sg(device->dev, data->sg, data->sg_len, conf.direction);
+   return -ENOMEM;
+}
+
+static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
+{
+   int ret;
+   struct mmc_data *data = host->data;
+
+   ret = mmci_dma_prep_data(host, host->data, NULL);
+   if (ret)
+   return ret;
+
+   /* Okay, go for it. */
dev_vdbg(mmc_dev(host->mmc),
 "Submit MMCI DMA job, sglen %d blksz %04x blks %04x flags 
%08x\n",
 data->sg_len, data->blksz, data->blocks, data->flags);
-   dmaengine_submit(desc);
-   dma_async_issue_pending(chan);
+   dmaengine_submit(host->dma_desc_current);
+   dma_async_issue_pending(host->dma_current);
 
datactrl |= MCI_DPSM_DMAENABLE;
 
@@ -421,14 +454,90 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
writel(readl(host->base + MMCIMASK0) | MCI_DATAENDMASK,
   host->base + MMCIMASK0);
return 0;
+}
 
-unmap_exit:
-   dmaengine_terminate_all(chan);
-   dma_unmap_sg(device->dev, data->sg, data->sg_len, conf.direction);
-   return -ENOMEM;
+static void mmci_get_next_data(struct mmci_host *host, struct mmc_data *data)
+{
+   struct mmci_host_next *next = &host->next_data;
+
+   if (data->host_cookie && data->host_cookie != next->cookie) {
+   printk(KERN_WARNING "[%s] invalid cookie: data->host_cookie %d"
+  " host->next_data.cookie %d\n",
+  __func__, data->host_cookie, host->next_data.cookie);
+   data->host_cookie = 0;
+   }
+
+   if (!data->host_cookie)
+   return;
+
+   

[PATCH v5 05/12] mmc: mmc_test: add debugfs file to list all tests

2011-06-18 Thread Per Forlin
Add a debugfs file "testlist" to print all available tests

Signed-off-by: Per Forlin 
---
 drivers/mmc/card/mmc_test.c |   39 ++-
 1 files changed, 38 insertions(+), 1 deletions(-)

diff --git a/drivers/mmc/card/mmc_test.c b/drivers/mmc/card/mmc_test.c
index 233cdfa..e8508e9 100644
--- a/drivers/mmc/card/mmc_test.c
+++ b/drivers/mmc/card/mmc_test.c
@@ -2445,6 +2445,32 @@ static const struct file_operations mmc_test_fops_test = {
.release= single_release,
 };
 
+static int mtf_testlist_show(struct seq_file *sf, void *data)
+{
+   int i;
+
+   mutex_lock(&mmc_test_lock);
+
+   for (i = 0; i < ARRAY_SIZE(mmc_test_cases); i++)
+   seq_printf(sf, "%d:\t%s\n", i+1, mmc_test_cases[i].name);
+
+   mutex_unlock(&mmc_test_lock);
+
+   return 0;
+}
+
+static int mtf_testlist_open(struct inode *inode, struct file *file)
+{
+   return single_open(file, mtf_testlist_show, inode->i_private);
+}
+
+static const struct file_operations mmc_test_fops_testlist = {
+   .open   = mtf_testlist_open,
+   .read   = seq_read,
+   .llseek = seq_lseek,
+   .release= single_release,
+};
+
 static void mmc_test_free_file_test(struct mmc_card *card)
 {
struct mmc_test_dbgfs_file *df, *dfs;
@@ -2476,7 +2502,18 @@ static int mmc_test_register_file_test(struct mmc_card *card)
 
if (IS_ERR_OR_NULL(file)) {
dev_err(&card->dev,
-   "Can't create file. Perhaps debugfs is disabled.\n");
+   "Can't create test. Perhaps debugfs is disabled.\n");
+   ret = -ENODEV;
+   goto err;
+   }
+
+   if (card->debugfs_root)
+   file = debugfs_create_file("testlist", S_IRUGO,
+   card->debugfs_root, card, &mmc_test_fops_testlist);
+
+   if (IS_ERR_OR_NULL(file)) {
+   dev_err(&card->dev,
+   "Can't create testlist. Perhaps debugfs is 
disabled.\n");
ret = -ENODEV;
goto err;
}
-- 
1.7.4.1




[PATCH v5 06/12] mmc: mmc_test: add test for non-blocking transfers

2011-06-18 Thread Per Forlin
Add four tests for read and write performance per
different transfer size, 4k to 4M:
 * Read using blocking mmc request
 * Read using non-blocking mmc request
 * Write using blocking mmc request
 * Write using non-blocking mmc request

The host driver must support pre_req() and post_req()
in order to run the non-blocking test cases.

Signed-off-by: Per Forlin 
---
 drivers/mmc/card/mmc_test.c |  322 +--
 1 files changed, 313 insertions(+), 9 deletions(-)

diff --git a/drivers/mmc/card/mmc_test.c b/drivers/mmc/card/mmc_test.c
index e8508e9..19e1132 100644
--- a/drivers/mmc/card/mmc_test.c
+++ b/drivers/mmc/card/mmc_test.c
@@ -22,6 +22,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define RESULT_OK  0
 #define RESULT_FAIL1
@@ -51,10 +52,12 @@ struct mmc_test_pages {
  * struct mmc_test_mem - allocated memory.
  * @arr: array of allocations
  * @cnt: number of allocations
+ * @size_min_cmn: lowest common size in array of allocations
  */
 struct mmc_test_mem {
struct mmc_test_pages *arr;
unsigned int cnt;
+   unsigned int size_min_cmn;
 };
 
 /**
@@ -148,6 +151,26 @@ struct mmc_test_card {
struct mmc_test_general_result  *gr;
 };
 
+enum mmc_test_prep_media {
+   MMC_TEST_PREP_NONE = 0,
+   MMC_TEST_PREP_WRITE_FULL = 1 << 0,
+   MMC_TEST_PREP_ERASE = 1 << 1,
+};
+
+struct mmc_test_multiple_rw {
+   unsigned int *bs;
+   unsigned int len;
+   unsigned int size;
+   bool do_write;
+   bool do_nonblock_req;
+   enum mmc_test_prep_media prepare;
+};
+
+struct mmc_test_async_req {
+   struct mmc_async_req areq;
+   struct mmc_test_card *test;
+};
+
 /***/
 /*  General helper functions   */
 /***/
@@ -302,6 +325,7 @@ static struct mmc_test_mem *mmc_test_alloc_mem(unsigned long min_sz,
unsigned long max_seg_page_cnt = DIV_ROUND_UP(max_seg_sz, PAGE_SIZE);
unsigned long page_cnt = 0;
unsigned long limit = nr_free_buffer_pages() >> 4;
+   unsigned int min_cmn = 0;
struct mmc_test_mem *mem;
 
if (max_page_cnt > limit)
@@ -345,6 +369,12 @@ static struct mmc_test_mem *mmc_test_alloc_mem(unsigned long min_sz,
mem->arr[mem->cnt].page = page;
mem->arr[mem->cnt].order = order;
mem->cnt += 1;
+   if (!min_cmn)
+   min_cmn = PAGE_SIZE << order;
+   else
+   min_cmn = min(min_cmn,
+ (unsigned int) (PAGE_SIZE << order));
+
if (max_page_cnt <= (1UL << order))
break;
max_page_cnt -= 1UL << order;
@@ -355,6 +385,7 @@ static struct mmc_test_mem *mmc_test_alloc_mem(unsigned long min_sz,
break;
}
}
+   mem->size_min_cmn = min_cmn;
 
return mem;
 
@@ -381,7 +412,6 @@ static int mmc_test_map_sg(struct mmc_test_mem *mem, unsigned long sz,
do {
for (i = 0; i < mem->cnt; i++) {
unsigned long len = PAGE_SIZE << mem->arr[i].order;
-
if (len > sz)
len = sz;
if (len > max_seg_sz)
@@ -661,7 +691,7 @@ static void mmc_test_prepare_broken_mrq(struct mmc_test_card *test,
  * Checks that a normal transfer didn't have any errors
  */
 static int mmc_test_check_result(struct mmc_test_card *test,
-   struct mmc_request *mrq)
+struct mmc_request *mrq)
 {
int ret;
 
@@ -685,6 +715,16 @@ static int mmc_test_check_result(struct mmc_test_card *test,
return ret;
 }
 
+
+static int mmc_test_check_result_async(struct mmc_card *card,
+  struct mmc_async_req *areq)
+{
+   struct mmc_test_async_req *test_async =
+   container_of(areq, struct mmc_test_async_req, areq);
+
+   return mmc_test_check_result(test_async->test, areq->mrq);
+}
+
 /*
  * Checks that a "short transfer" behaved as expected
  */
@@ -720,6 +760,89 @@ static int mmc_test_check_broken_result(struct mmc_test_card *test,
 }
 
 /*
+ * Tests nonblock transfer with certain parameters
+ */
+static void mmc_test_nonblock_reset(struct mmc_request *mrq,
+   struct mmc_command *cmd,
+   struct mmc_command *stop,
+   struct mmc_data *data)
+{
+   memset(mrq, 0, sizeof(struct mmc_request));
+   memset(cmd, 0, sizeof(struct mmc_command));
+   memset(data, 0, sizeof(struct mmc_data));
+   memset(stop, 0, sizeof(struct mmc_command));
+
+   mrq->cmd = cmd;
+   mrq->data = data;
+   mrq->stop = stop;
+}
+static int mmc_test_nonblock_transfer(struct

[PATCH v5 07/12] mmc: add member in mmc queue struct to hold request data

2011-06-18 Thread Per Forlin
The way the request data is organized in the mmc queue struct
only allows processing of one request at a time.
This patch adds a new struct to hold mmc queue request data such as
sg list, request, blk request and bounce buffers, and updates any functions
depending on the mmc queue struct. This lays the groundwork for
using multiple active requests for one mmc queue.

Signed-off-by: Per Forlin 
---
 drivers/mmc/card/block.c |  125 +---
 drivers/mmc/card/queue.c |  129 --
 drivers/mmc/card/queue.h |   31 ---
 3 files changed, 149 insertions(+), 136 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 71da564..3d11690 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -427,14 +427,6 @@ static const struct block_device_operations mmc_bdops = {
 #endif
 };
 
-struct mmc_blk_request {
-   struct mmc_request  mrq;
-   struct mmc_command  sbc;
-   struct mmc_command  cmd;
-   struct mmc_command  stop;
-   struct mmc_data data;
-};
-
 static inline int mmc_blk_part_switch(struct mmc_card *card,
  struct mmc_blk_data *md)
 {
@@ -673,7 +665,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
 {
struct mmc_blk_data *md = mq->data;
struct mmc_card *card = md->queue.card;
-   struct mmc_blk_request brq;
+   struct mmc_blk_request *brq = &mq->mqrq_cur->brq;
int ret = 1, disable_multi = 0;
 
/*
@@ -689,56 +681,56 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
struct mmc_command cmd = {0};
u32 readcmd, writecmd, status = 0;
 
-   memset(&brq, 0, sizeof(struct mmc_blk_request));
-   brq.mrq.cmd = &brq.cmd;
-   brq.mrq.data = &brq.data;
+   memset(brq, 0, sizeof(struct mmc_blk_request));
+   brq->mrq.cmd = &brq->cmd;
+   brq->mrq.data = &brq->data;
 
-   brq.cmd.arg = blk_rq_pos(req);
+   brq->cmd.arg = blk_rq_pos(req);
if (!mmc_card_blockaddr(card))
-   brq.cmd.arg <<= 9;
-   brq.cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
-   brq.data.blksz = 512;
-   brq.stop.opcode = MMC_STOP_TRANSMISSION;
-   brq.stop.arg = 0;
-   brq.stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
-   brq.data.blocks = blk_rq_sectors(req);
+   brq->cmd.arg <<= 9;
+   brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
+   brq->data.blksz = 512;
+   brq->stop.opcode = MMC_STOP_TRANSMISSION;
+   brq->stop.arg = 0;
+   brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
+   brq->data.blocks = blk_rq_sectors(req);
 
/*
 * The block layer doesn't support all sector count
 * restrictions, so we need to be prepared for too big
 * requests.
 */
-   if (brq.data.blocks > card->host->max_blk_count)
-   brq.data.blocks = card->host->max_blk_count;
+   if (brq->data.blocks > card->host->max_blk_count)
+   brq->data.blocks = card->host->max_blk_count;
 
/*
 * After a read error, we redo the request one sector at a time
 * in order to accurately determine which sectors can be read
 * successfully.
 */
-   if (disable_multi && brq.data.blocks > 1)
-   brq.data.blocks = 1;
+   if (disable_multi && brq->data.blocks > 1)
+   brq->data.blocks = 1;
 
-   if (brq.data.blocks > 1 || do_rel_wr) {
+   if (brq->data.blocks > 1 || do_rel_wr) {
/* SPI multiblock writes terminate using a special
 * token, not a STOP_TRANSMISSION request.
 */
if (!mmc_host_is_spi(card->host) ||
rq_data_dir(req) == READ)
-   brq.mrq.stop = &brq.stop;
+   brq->mrq.stop = &brq->stop;
readcmd = MMC_READ_MULTIPLE_BLOCK;
writecmd = MMC_WRITE_MULTIPLE_BLOCK;
} else {
-   brq.mrq.stop = NULL;
+   brq->mrq.stop = NULL;
readcmd = MMC_READ_SINGLE_BLOCK;
writecmd = MMC_WRITE_BLOCK;
}
if (rq_data_dir(req) == READ) {
-   brq.cmd.opcode = readcmd;
-   brq.data.flags |= MMC_DATA_READ;
+   brq->cmd.opcode = readcm

[PATCH v5 08/12] mmc: add a block request prepare function

2011-06-18 Thread Per Forlin
Break out code from mmc_blk_issue_rw_rq to create a
block request prepare function. This doesn't change
any functionality. This helps when handling more
than one active block request.

Signed-off-by: Per Forlin 
---
 drivers/mmc/card/block.c |  224 -
 1 files changed, 119 insertions(+), 105 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 3d11690..144d435 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -661,12 +661,15 @@ static inline void mmc_apply_rel_rw(struct mmc_blk_request *brq,
}
 }
 
-static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
+  struct mmc_card *card,
+  int disable_multi,
+  struct mmc_queue *mq)
 {
+   u32 readcmd, writecmd;
+   struct mmc_blk_request *brq = &mqrq->brq;
+   struct request *req = mqrq->req;
struct mmc_blk_data *md = mq->data;
-   struct mmc_card *card = md->queue.card;
-   struct mmc_blk_request *brq = &mq->mqrq_cur->brq;
-   int ret = 1, disable_multi = 0;
 
/*
 * Reliable writes are used to implement Forced Unit Access and
@@ -677,120 +680,131 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
(rq_data_dir(req) == WRITE) &&
(md->flags & MMC_BLK_REL_WR);
 
-   do {
-   struct mmc_command cmd = {0};
-   u32 readcmd, writecmd, status = 0;
-
-   memset(brq, 0, sizeof(struct mmc_blk_request));
-   brq->mrq.cmd = &brq->cmd;
-   brq->mrq.data = &brq->data;
-
-   brq->cmd.arg = blk_rq_pos(req);
-   if (!mmc_card_blockaddr(card))
-   brq->cmd.arg <<= 9;
-   brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
-   brq->data.blksz = 512;
-   brq->stop.opcode = MMC_STOP_TRANSMISSION;
-   brq->stop.arg = 0;
-   brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
-   brq->data.blocks = blk_rq_sectors(req);
-
-   /*
-* The block layer doesn't support all sector count
-* restrictions, so we need to be prepared for too big
-* requests.
-*/
-   if (brq->data.blocks > card->host->max_blk_count)
-   brq->data.blocks = card->host->max_blk_count;
+   memset(brq, 0, sizeof(struct mmc_blk_request));
+   brq->mrq.cmd = &brq->cmd;
+   brq->mrq.data = &brq->data;
 
-   /*
-* After a read error, we redo the request one sector at a time
-* in order to accurately determine which sectors can be read
-* successfully.
-*/
-   if (disable_multi && brq->data.blocks > 1)
-   brq->data.blocks = 1;
+   brq->cmd.arg = blk_rq_pos(req);
+   if (!mmc_card_blockaddr(card))
+   brq->cmd.arg <<= 9;
+   brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
+   brq->data.blksz = 512;
+   brq->stop.opcode = MMC_STOP_TRANSMISSION;
+   brq->stop.arg = 0;
+   brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
+   brq->data.blocks = blk_rq_sectors(req);
 
-   if (brq->data.blocks > 1 || do_rel_wr) {
-   /* SPI multiblock writes terminate using a special
-* token, not a STOP_TRANSMISSION request.
-*/
-   if (!mmc_host_is_spi(card->host) ||
-   rq_data_dir(req) == READ)
-   brq->mrq.stop = &brq->stop;
-   readcmd = MMC_READ_MULTIPLE_BLOCK;
-   writecmd = MMC_WRITE_MULTIPLE_BLOCK;
-   } else {
-   brq->mrq.stop = NULL;
-   readcmd = MMC_READ_SINGLE_BLOCK;
-   writecmd = MMC_WRITE_BLOCK;
-   }
-   if (rq_data_dir(req) == READ) {
-   brq->cmd.opcode = readcmd;
-   brq->data.flags |= MMC_DATA_READ;
-   } else {
-   brq->cmd.opcode = writecmd;
-   brq->data.flags |= MMC_DATA_WRITE;
-   }
+   /*
+* The block layer doesn't support all sector count
+* restrictions, so we need to be prepared for too big
+* requests.
+*/
+   if (brq->data.blocks > card->host->max_blk_count)
+   brq->data.blocks = card->host->max_blk_count;
 
-   if (do_rel_wr)
-   mmc_apply_rel_rw(&brq, card, req);
+   /*
+* After a read error, we redo the request one sector at a time
+* in order to accurately determine 

[PATCH v5 09/12] mmc: move error code in mmc_block_issue_rw_rq to a separate function.

2011-06-18 Thread Per Forlin
Break out code without functional changes. This simplifies the code and
makes way for handling two parallel requests.

Signed-off-by: Per Forlin 
---
 drivers/mmc/card/block.c |  246 ++---
 1 files changed, 142 insertions(+), 104 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 144d435..6a84a75 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -106,6 +106,13 @@ struct mmc_blk_data {
 
 static DEFINE_MUTEX(open_lock);
 
+enum mmc_blk_status {
+   MMC_BLK_SUCCESS = 0,
+   MMC_BLK_RETRY,
+   MMC_BLK_DATA_ERR,
+   MMC_BLK_CMD_ERR,
+};
+
 module_param(perdev_minors, int, 0444);
 MODULE_PARM_DESC(perdev_minors, "Minors numbers to allocate per device");
 
@@ -661,6 +668,112 @@ static inline void mmc_apply_rel_rw(struct mmc_blk_request *brq,
}
 }
 
+static enum mmc_blk_status mmc_blk_err_check(struct mmc_blk_request *brq,
+struct request *req,
+struct mmc_card *card,
+struct mmc_blk_data *md)
+{
+   struct mmc_command cmd;
+   u32 status = 0;
+   enum mmc_blk_status ret = MMC_BLK_SUCCESS;
+
+   /*
+* Check for errors here, but don't jump to cmd_err
+* until later as we need to wait for the card to leave
+* programming mode even when things go wrong.
+*/
+   if (brq->sbc.error || brq->cmd.error ||
+   brq->data.error || brq->stop.error) {
+   if (brq->data.blocks > 1 && rq_data_dir(req) == READ) {
+   /* Redo read one sector at a time */
+   printk(KERN_WARNING "%s: retrying using single "
+  "block read\n", req->rq_disk->disk_name);
+   ret = MMC_BLK_RETRY;
+   goto out;
+   }
+   status = get_card_status(card, req);
+   }
+
+   if (brq->sbc.error) {
+   printk(KERN_ERR "%s: error %d sending SET_BLOCK_COUNT "
+  "command, response %#x, card status %#x\n",
+  req->rq_disk->disk_name, brq->sbc.error,
+  brq->sbc.resp[0], status);
+   }
+
+   if (brq->cmd.error) {
+   printk(KERN_ERR "%s: error %d sending read/write "
+  "command, response %#x, card status %#x\n",
+  req->rq_disk->disk_name, brq->cmd.error,
+  brq->cmd.resp[0], status);
+   }
+
+   if (brq->data.error) {
+   if (brq->data.error == -ETIMEDOUT && brq->mrq.stop)
+   /* 'Stop' response contains card status */
+   status = brq->mrq.stop->resp[0];
+   printk(KERN_ERR "%s: error %d transferring data,"
+  " sector %u, nr %u, card status %#x\n",
+  req->rq_disk->disk_name, brq->data.error,
+  (unsigned)blk_rq_pos(req),
+  (unsigned)blk_rq_sectors(req), status);
+   }
+
+   if (brq->stop.error) {
+   printk(KERN_ERR "%s: error %d sending stop command, "
+  "response %#x, card status %#x\n",
+  req->rq_disk->disk_name, brq->stop.error,
+  brq->stop.resp[0], status);
+   }
+
+   if (!mmc_host_is_spi(card->host) && rq_data_dir(req) != READ) {
+   do {
+   int err;
+
+   cmd.opcode = MMC_SEND_STATUS;
+   cmd.arg = card->rca << 16;
+   cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
+   err = mmc_wait_for_cmd(card->host, &cmd, 5);
+   if (err) {
+   printk(KERN_ERR "%s: error %d requesting 
status\n",
+  req->rq_disk->disk_name, err);
+   ret = MMC_BLK_CMD_ERR;
+   goto out;
+   }
+   /*
+* Some cards mishandle the status bits,
+* so make sure to check both the busy
+* indication and the card state.
+*/
+   } while (!(cmd.resp[0] & R1_READY_FOR_DATA) ||
+(R1_CURRENT_STATE(cmd.resp[0]) == 7));
+
+#if 0
+   if (cmd.resp[0] & ~0x0900)
+   printk(KERN_ERR "%s: status = %08x\n",
+  req->rq_disk->disk_name, cmd.resp[0]);
+   if (mmc_decode_status(cmd.resp)) {
+   ret = MMC_BLK_CMD_ERR;
+   goto out;
+   }
+#endif
+   }
+
+   if (brq->cmd.error || brq->stop.error || brq->data.error) {
+   if (rq_data_dir(req) == READ)
+   /*
+* After an er

[PATCH v5 10/12] mmc: add a second mmc queue request member

2011-06-18 Thread Per Forlin
Add an additional mmc queue request instance to make way for
two active block requests. One request may be active while the
other request is being prepared.

Signed-off-by: Per Forlin 
---
 drivers/mmc/card/queue.c |   44 ++--
 drivers/mmc/card/queue.h |3 ++-
 2 files changed, 44 insertions(+), 3 deletions(-)

diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 81d0eef..0757a39 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -130,6 +130,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
u64 limit = BLK_BOUNCE_HIGH;
int ret;
struct mmc_queue_req *mqrq_cur = &mq->mqrq[0];
+   struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
 
if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
limit = *mmc_dev(host)->dma_mask;
@@ -140,7 +141,9 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
return -ENOMEM;
 
memset(&mq->mqrq_cur, 0, sizeof(mq->mqrq_cur));
+   memset(&mq->mqrq_prev, 0, sizeof(mq->mqrq_prev));
mq->mqrq_cur = mqrq_cur;
+   mq->mqrq_prev = mqrq_prev;
mq->queue->queuedata = mq;
 
blk_queue_prep_rq(mq->queue, mmc_prep_request);
@@ -181,9 +184,17 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
"allocate bounce cur buffer\n",
mmc_card_name(card));
}
+   mqrq_prev->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
+   if (!mqrq_prev->bounce_buf) {
+   printk(KERN_WARNING "%s: unable to "
+   "allocate bounce prev buffer\n",
+   mmc_card_name(card));
+   kfree(mqrq_cur->bounce_buf);
+   mqrq_cur->bounce_buf = NULL;
+   }
}
 
-   if (mqrq_cur->bounce_buf) {
+   if (mqrq_cur->bounce_buf && mqrq_prev->bounce_buf) {
blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY);
blk_queue_max_hw_sectors(mq->queue, bouncesz / 512);
blk_queue_max_segments(mq->queue, bouncesz / 512);
@@ -198,11 +209,19 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
if (ret)
goto cleanup_queue;
 
+   mqrq_prev->sg = mmc_alloc_sg(1, &ret);
+   if (ret)
+   goto cleanup_queue;
+
+   mqrq_prev->bounce_sg =
+   mmc_alloc_sg(bouncesz / 512, &ret);
+   if (ret)
+   goto cleanup_queue;
}
}
 #endif
 
-   if (!mqrq_cur->bounce_buf) {
+   if (!mqrq_cur->bounce_buf && !mqrq_prev->bounce_buf) {
blk_queue_bounce_limit(mq->queue, limit);
blk_queue_max_hw_sectors(mq->queue,
min(host->max_blk_count, host->max_req_size / 512));
@@ -213,6 +232,10 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
if (ret)
goto cleanup_queue;
 
+
+   mqrq_prev->sg = mmc_alloc_sg(host->max_segs, &ret);
+   if (ret)
+   goto cleanup_queue;
}
 
sema_init(&mq->thread_sem, 1);
@@ -229,6 +252,8 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
  free_bounce_sg:
kfree(mqrq_cur->bounce_sg);
mqrq_cur->bounce_sg = NULL;
+   kfree(mqrq_prev->bounce_sg);
+   mqrq_prev->bounce_sg = NULL;
 
  cleanup_queue:
kfree(mqrq_cur->sg);
@@ -236,6 +261,11 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
kfree(mqrq_cur->bounce_buf);
mqrq_cur->bounce_buf = NULL;
 
+   kfree(mqrq_prev->sg);
+   mqrq_prev->sg = NULL;
+   kfree(mqrq_prev->bounce_buf);
+   mqrq_prev->bounce_buf = NULL;
+
blk_cleanup_queue(mq->queue);
return ret;
 }
@@ -245,6 +275,7 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
struct request_queue *q = mq->queue;
unsigned long flags;
struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
+   struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
 
/* Make sure the queue isn't suspended, as that will deadlock */
mmc_queue_resume(mq);
@@ -267,6 +298,15 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
kfree(mqrq_cur->bounce_buf);
mqrq_cur->bounce_buf = NULL;
 
+   kfree(mqrq_prev->bounce_sg);
+   mqrq_prev->bounce_sg = NULL;
+
+   kfree(mqrq_prev->sg);
+   mqrq_prev->sg = NULL;
+
+   kfree(mqrq_prev->

[PATCH v5 11/12] mmc: test: add random fault injection in core.c

2011-06-18 Thread Per Forlin
This simple fault injection proved to be very useful for testing
the error handling in the block.c rw_rq() path. It may still be
useful for testing whether the host driver handles pre_req() and
post_req() correctly in case of errors.
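
A typical run-time configuration via debugfs might look like this
(a sketch; the fail_mmc_request knobs are the standard fault-attr
files, make-it-fail is the per-host flag added by this patch, and
"mmc0" is just an example host):

    # fail roughly 10% of MMC data requests, with no upper limit
    echo 10 > /sys/kernel/debug/fail_mmc_request/probability
    echo -1 > /sys/kernel/debug/fail_mmc_request/times
    # enable injection on one specific host
    echo 1 > /sys/kernel/debug/mmc0/make-it-fail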

Signed-off-by: Per Forlin 
---
 drivers/mmc/core/core.c|   54 
 drivers/mmc/core/debugfs.c |5 
 include/linux/mmc/host.h   |3 ++
 lib/Kconfig.debug  |   11 +
 4 files changed, 73 insertions(+), 0 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 3538166..9128054 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -23,6 +23,8 @@
 #include 
 #include 
 #include 
+#include <linux/fault-inject.h>
+#include <linux/random.h>
 
 #include 
 #include 
@@ -82,6 +84,56 @@ static void mmc_flush_scheduled_work(void)
flush_workqueue(workqueue);
 }
 
+#ifdef CONFIG_FAIL_MMC_REQUEST
+
+static DECLARE_FAULT_ATTR(fail_mmc_request);
+
+static int __init setup_fail_mmc_request(char *str)
+{
+   return setup_fault_attr(&fail_mmc_request, str);
+}
+__setup("fail_mmc_request=", setup_fail_mmc_request);
+
+static void mmc_should_fail_request(struct mmc_host *host,
+   struct mmc_request *mrq)
+{
+   struct mmc_command *cmd = mrq->cmd;
+   struct mmc_data *data = mrq->data;
+   static const int data_errors[] = {
+   -ETIMEDOUT,
+   -EILSEQ,
+   -EIO,
+   };
+
+   if (!data)
+   return;
+
+   if (cmd->error || data->error || !host->make_it_fail ||
+   !should_fail(&fail_mmc_request, data->blksz * data->blocks))
+   return;
+
+   data->error = data_errors[random32() % ARRAY_SIZE(data_errors)];
+   data->bytes_xfered = (random32() % (data->bytes_xfered >> 9)) << 9;
+}
+
+static int __init fail_mmc_request_debugfs(void)
+{
+   return init_fault_attr_dentries(&fail_mmc_request,
+   "fail_mmc_request");
+}
+
+late_initcall(fail_mmc_request_debugfs);
+
+#else /* CONFIG_FAIL_MMC_REQUEST */
+
+static void mmc_should_fail_request(struct mmc_host *host,
+   struct mmc_request *mrq)
+{
+}
+
+#endif /* CONFIG_FAIL_MMC_REQUEST */
+
+
 /**
  * mmc_request_done - finish processing an MMC request
  * @host: MMC host which completed request
@@ -108,6 +160,8 @@ void mmc_request_done(struct mmc_host *host, struct mmc_request *mrq)
cmd->error = 0;
host->ops->request(host, mrq);
} else {
+   mmc_should_fail_request(host, mrq);
+
led_trigger_event(host->led, LED_OFF);
 
pr_debug("%s: req done (CMD%u): %d: %08x %08x %08x %08x\n",
diff --git a/drivers/mmc/core/debugfs.c b/drivers/mmc/core/debugfs.c
index 998797e..588e76f 100644
--- a/drivers/mmc/core/debugfs.c
+++ b/drivers/mmc/core/debugfs.c
@@ -188,6 +188,11 @@ void mmc_add_host_debugfs(struct mmc_host *host)
root, &host->clk_delay))
goto err_node;
 #endif
+#ifdef CONFIG_FAIL_MMC_REQUEST
+   if (!debugfs_create_u8("make-it-fail", S_IRUSR | S_IWUSR,
+  root, &host->make_it_fail))
+   goto err_node;
+#endif
return;
 
 err_node:
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 01e7d2b..58414d1d 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -302,6 +302,9 @@ struct mmc_host {
 
struct mmc_async_req*areq;  /* active async req */
 
+#ifdef CONFIG_FAIL_MMC_REQUEST
+   u8  make_it_fail;
+#endif
unsigned long   private[0] cacheline_aligned;
 };
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index dd373c8..04f50d8 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1067,6 +1067,17 @@ config FAIL_IO_TIMEOUT
  Only works with drivers that use the generic timeout handling,
  for others it wont do anything.
 
+config FAIL_MMC_REQUEST
+   bool "Fault-injection capability for MMC IO"
+   select DEBUG_FS
+   depends on FAULT_INJECTION
+   help
+ Provide fault-injection capability for MMC IO.
+ This will make the mmc core return data errors. This is
+ useful for testing the error handling in the mmc block device
+ and how the mmc host driver handles retries from
+ the block device.
+
 config FAULT_INJECTION_DEBUG_FS
bool "Debugfs entries for fault-injection capabilities"
depends on FAULT_INJECTION && SYSFS && DEBUG_FS
-- 
1.7.4.1




[PATCH v5 12/12] mmc: add handling for two parallel block requests in issue_rw_rq

2011-06-18 Thread Per Forlin
Change mmc_blk_issue_rw_rq() to become asynchronous.
The execution flow looks like this:
The mmc-queue calls issue_rw_rq(), which sends the request
to the host and returns to the mmc-queue. The mmc-queue calls
issue_rw_rq() again with a new request. This new request is prepared
in issue_rw_rq(), which then waits for the active request to complete
before pushing the new one to the host. When the mmc-queue is empty
it calls issue_rw_rq() with req=NULL to finish off the active
request without starting a new one.
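
The pipelining relies on the contract of mmc_start_req(), roughly
as follows (a sketch of the semantics as used here, not of the
core implementation itself):

    /*
     * completed = mmc_start_req(host, new_areq, &status);
     *
     * 1. run pre_req() on new_areq (DMA preparation), if any
     * 2. wait for the previously started request to finish
     * 3. run its err_check() and report the result in status
     * 4. start new_areq on the host, if any
     *
     * Returns the previously active, now completed, request,
     * or NULL if none was pending.
     */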

Signed-off-by: Per Forlin 
---
 drivers/mmc/card/block.c |  121 +-
 drivers/mmc/card/queue.c |   17 +--
 drivers/mmc/card/queue.h |1 +
 3 files changed, 101 insertions(+), 38 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 6a84a75..66db77a 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -108,6 +108,7 @@ static DEFINE_MUTEX(open_lock);
 
 enum mmc_blk_status {
MMC_BLK_SUCCESS = 0,
+   MMC_BLK_PARTIAL,
MMC_BLK_RETRY,
MMC_BLK_DATA_ERR,
MMC_BLK_CMD_ERR,
@@ -668,14 +669,16 @@ static inline void mmc_apply_rel_rw(struct mmc_blk_request *brq,
}
 }
 
-static enum mmc_blk_status mmc_blk_err_check(struct mmc_blk_request *brq,
-struct request *req,
-struct mmc_card *card,
-struct mmc_blk_data *md)
+static int mmc_blk_err_check(struct mmc_card *card,
+struct mmc_async_req *areq)
 {
struct mmc_command cmd;
u32 status = 0;
enum mmc_blk_status ret = MMC_BLK_SUCCESS;
+   struct mmc_queue_req *mq_mrq = container_of(areq, struct mmc_queue_req,
+   mmc_active);
+   struct mmc_blk_request *brq = &mq_mrq->brq;
+   struct request *req = mq_mrq->req;
 
/*
 * Check for errors here, but don't jump to cmd_err
@@ -770,7 +773,11 @@ static enum mmc_blk_status mmc_blk_err_check(struct mmc_blk_request *brq,
else
ret = MMC_BLK_DATA_ERR;
}
-out:
+
+   if (ret == MMC_BLK_SUCCESS &&
+   blk_rq_bytes(req) != brq->data.bytes_xfered)
+   ret = MMC_BLK_PARTIAL;
+ out:
return ret;
 }
 
@@ -901,27 +908,59 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
brq->data.sg_len = i;
}
 
+   mqrq->mmc_active.mrq = &brq->mrq;
+   mqrq->mmc_active.err_check = mmc_blk_err_check;
+
mmc_queue_bounce_pre(mqrq);
 }
 
-static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
+static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 {
struct mmc_blk_data *md = mq->data;
struct mmc_card *card = md->queue.card;
-   struct mmc_blk_request *brq = &mq->mqrq_cur->brq;
-   int ret = 1, disable_multi = 0;
+   struct mmc_blk_request *brq;
+   int ret = 1;
+   int disable_multi = 0;
enum mmc_blk_status status;
+   struct mmc_queue_req *mq_rq;
+   struct request *req;
+   struct mmc_async_req *areq;
+
+   if (!rqc && !mq->mqrq_prev->req)
+   goto out;
 
do {
-   mmc_blk_rw_rq_prep(mq->mqrq_cur, card, disable_multi, mq);
-   mmc_wait_for_req(card->host, &brq->mrq);
+   if (rqc) {
+   mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
+   areq = &mq->mqrq_cur->mmc_active;
+   } else
+   areq = NULL;
+   areq = mmc_start_req(card->host, areq, (int *) &status);
+   if (!areq)
+   goto out;
 
-   mmc_queue_bounce_post(mq->mqrq_cur);
-   status = mmc_blk_err_check(brq, req, card, md);
+   mq_rq = container_of(areq, struct mmc_queue_req, mmc_active);
+   brq = &mq_rq->brq;
+   req = mq_rq->req;
+   mmc_queue_bounce_post(mq_rq);
 
switch (status) {
-   case MMC_BLK_CMD_ERR:
-   goto cmd_err;
+   case MMC_BLK_SUCCESS:
+   case MMC_BLK_PARTIAL:
+   /*
+* A block was successfully transferred.
+*/
+   spin_lock_irq(&md->lock);
+   ret = __blk_end_request(req, 0,
+   brq->data.bytes_xfered);
+   spin_unlock_irq(&md->lock);
+   if (status == MMC_BLK_SUCCESS && ret) {
+   /* If this happens it is a bug */
+   printk(KERN_ERR "%s BUG rq_tot %d d_xfer %d\n",
+  __func__, blk_rq_bytes(req),
+  brq->data.bytes_xfered);
+  

Re: [GIT PULL] fix for bug 754254

2011-06-18 Thread Nicolas Pitre
On Fri, 17 Jun 2011, Shawn Guo wrote:

> Hi Nicolas,
> 
> Could you pull the fix for [Bug 754254] imx51 randomly truncates
> serial input at 31 characters?
> 
> It extends the card CD/WP support for mx5 platforms, and adds the
> board level configuration for mx51evk to fix bug 754254 on this
> particular board.  Other boards need to add their board level
> configuration to actually enable the support.

Done.

Actually, I merged and pushed it yesterday but forgot to acknowledge.


Nicolas



Re: [PATCH RESEND] omap_hsmmc: use original sg_len for dma_unmap_sg

2011-06-18 Thread Chris Ball
Hi Per,

On Fri, Jun 17 2011, Per Forlin wrote:
> Don't use the sg_len value returned by dma_map_sg() as the input parameter
> to dma_unmap_sg(). Use the original sg_len for both dma_map_sg
> and dma_unmap_sg according to the documentation in DMA-API.txt.
>
> Signed-off-by: Per Forlin 
> Reviewed-by: Venkatraman S 
> ---
>  drivers/mmc/host/omap_hsmmc.c |5 +++--
>  1 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
> index 259ece0..ad3731a 100644
> --- a/drivers/mmc/host/omap_hsmmc.c
> +++ b/drivers/mmc/host/omap_hsmmc.c
> @@ -959,7 +959,8 @@ static void omap_hsmmc_dma_cleanup(struct omap_hsmmc_host *host, int errno)
>   spin_unlock(&host->irq_lock);
>  
>   if (host->use_dma && dma_ch != -1) {
> - dma_unmap_sg(mmc_dev(host->mmc), host->data->sg, host->dma_len,
> + dma_unmap_sg(mmc_dev(host->mmc), host->data->sg,
> + host->data->sg_len,
>   omap_hsmmc_get_dma_dir(host, host->data));
>   omap_free_dma(dma_ch);
>   }
> @@ -1343,7 +1344,7 @@ static void omap_hsmmc_dma_cb(int lch, u16 ch_status, void *cb_data)
>   return;
>   }
>  
> - dma_unmap_sg(mmc_dev(host->mmc), data->sg, host->dma_len,
> + dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
>   omap_hsmmc_get_dma_dir(host, data));
>  
>   req_in_progress = host->req_in_progress;
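
For reference, the DMA-API.txt rule being applied above, in a
minimal sketch (dev, sg and sg_len are generic placeholders):

    int nents;

    /* dma_map_sg() may return fewer entries than sg_len if
     * adjacent entries were merged. */
    nents = dma_map_sg(dev, sg, sg_len, DMA_TO_DEVICE);
    if (!nents)
        return -ENOMEM;

    /* ... program the DMA engine using the 'nents' mapped entries ... */

    /* Unmap with the original sg_len, NOT the returned 'nents'. */
    dma_unmap_sg(dev, sg, sg_len, DMA_TO_DEVICE);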

Pushed to mmc-next for 3.0-rc, thanks.

- Chris.
-- 
Chris Ball  
One Laptop Per Child
