Re: Vexpress LTP questions

2010-09-23 Thread Loïc Minier
On Wed, Sep 22, 2010, Matt Waddel wrote:
> fallocate01    1  TFAIL  :  fallocate(5, 0, 49152, 4096) failed: 
> TEST_ERRNO=EFBIG(27): File too large
> fallocate01    2  TFAIL  :  fallocate(6, 1, 49152, 4096) failed: 
> TEST_ERRNO=EFBIG(27): File too large
> fallocate02    7  TFAIL  :  fallocate(tfile_write_15396:6, 1, 0, 4096) 
> failed, expected errno:0: TEST_ERRNO=EFBIG(27):
> File too large
> fallocate03    1  TFAIL  :  fallocate(tfile_sparse_15397, 0, 8192, 4096) 
> failed: TEST_ERRNO=EFBIG(27): File too large
> fallocate03    2  TFAIL  :  fallocate(tfile_sparse_15397, 0, 49152, 4096) 
> failed: TEST_ERRNO=EFBIG(27): File too large
> fallocate03    3  TFAIL  :  fallocate(tfile_sparse_15397, 0, 69632, 4096) 
> failed: TEST_ERRNO=EFBIG(27): File too large
> fallocate03    4  TFAIL  :  fallocate(tfile_sparse_15397, 0, 102400, 4096) 
> failed: TEST_ERRNO=EFBIG(27): File too large
> fallocate03    5  TFAIL  :  fallocate(tfile_sparse_15397, 1, 8192, 4096) 
> failed: TEST_ERRNO=EFBIG(27): File too large
> fallocate03    6  TFAIL  :  fallocate(tfile_sparse_15397, 1, 49152, 4096) 
> failed: TEST_ERRNO=EFBIG(27): File too large
> fallocate03    7  TFAIL  :  fallocate(tfile_sparse_15397, 1, 77824, 4096) 
> failed: TEST_ERRNO=EFBIG(27): File too large
> fallocate03    8  TFAIL  :  fallocate(tfile_sparse_15397, 1, 106496, 4096) 
> failed: TEST_ERRNO=EFBIG(27): File too large

 fallocate(2) says EFBIG is returned when offset+len exceeds the maximum
 file size; this would mean your fs doesn't support files that large.

 Could you try reproducing with fallocate(1) yourself?  Perhaps the
 testsuite needs to run on a specific filesystem, or should be fixed to
 not assume larger sizes than your fs supports.
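
 If fallocate(1) from util-linux isn't available on the target image, a
 tiny C reproducer works just as well.  This is only a sketch: the file
 name is made up, and the mode/offset/length values are copied from the
 fallocate03 case above; run it on the same filesystem the LTP tests use.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
        /* "tfile_repro" is just an illustrative name */
        const char *path = argc > 1 ? argv[1] : "tfile_repro";
        int fd = open(path, O_RDWR | O_CREAT, 0644);

        if (fd < 0) {
                perror("open");
                return EXIT_FAILURE;
        }

        /* mode 0, offset 8192, length 4096, as in the failing case above */
        if (fallocate(fd, 0, 8192, 4096) < 0)
                perror("fallocate");    /* "File too large" if EFBIG recurs */
        else
                puts("fallocate succeeded");

        close(fd);
        return EXIT_SUCCESS;
}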

> getcontext01    1  TFAIL  :  getcontext - Sanity test :  Fail errno=38 : 
> Function not implemented

 Looks like a missing libc implementation; you could file a bug against
 linaro-toolchain-misc to track this, I guess; in any case, you should
 talk to the toolchain folks.

> get_robust_list01    1  TFAIL  :  get_robust_list: retval = -1 (expected -1), 
> errno = 38 (expected 14)
> get_robust_list01    2  TFAIL  :  get_robust_list: retval = -1 (expected -1), 
> errno = 38 (expected 14)
> get_robust_list01    3  TFAIL  :  get_robust_list: retval = -1 (expected -1), 
> errno = 38 (expected 3)
> get_robust_list01    4  TFAIL  :  get_robust_list: retval = -1 (expected -1), 
> errno = 38 (expected 1)
> get_robust_list01    5  TFAIL  :  get_robust_list: retval = -1 (expected 0), 
> errno = 38 (expected 0)
> set_robust_list01    1  TFAIL  :  set_robust_list: retval = -1 (expected -1), 
> errno = 38 (expected 22)
> set_robust_list01    2  TFAIL  :  set_robust_list: retval = -1 (expected 0), 
> errno = 38 (expected 0)

 No idea what these are; you'll have to check the source and see why they
 fail.

> swapon03    1  TFAIL  :  Failed to find out existing number of swap files: 
> errno=ENOENT(2): No such file or directory

 Do you have swap enabled in your test system?  The test procedure might
 have to be fixed to ensure that's the case.
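
 One quick way to confirm on the target is to count the entries in
 /proc/swaps; the ENOENT above hints that the file itself may be missing,
 e.g. on a kernel built without CONFIG_SWAP.  A minimal sketch:

#include <stdio.h>

int main(void)
{
        char line[256];
        int count = -1;                 /* skip the header line */
        FILE *fp = fopen("/proc/swaps", "r");

        if (!fp) {
                perror("/proc/swaps");  /* same ENOENT the test reports */
                return 1;
        }
        while (fgets(line, sizeof(line), fp))
                count++;
        fclose(fp);

        printf("%d active swap area(s)\n", count < 0 ? 0 : count);
        return 0;
}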

> sysctl03    2  TFAIL  :  Got expected error: TEST_ERRNO=EACCES(13): 
> Permission denied

 No idea what that one is.

-- 
Loïc Minier



Linaro Infrastructure Team Weekly Report (2010-09-17 to 2010-09-23)

2010-09-23 Thread Ian Smith


The weekly report for the Linaro Infrastructure team may be found at:

Status report: https://wiki.linaro.org/Platform/Infrastructure/Status/2010-09-23
Burndown chart: http://people.canonical.com/~pitti/workitems/maverick/linaro-infrastructure.html


The burndown chart shows that a number of work items were completed this
week, and several more are awaiting review and should complete shortly.


 * Filed and fixed bug #639930 (Running undefined subcommands gives 
a traceback) on Abrek
 * Fixed bug #640251 (A missing file when running the testsuite in a 
virtual machine)
 * Merged entire dev branch into trunk
 * Branched and released Launch Control 0.1 (after 486 commits)
 * Asked IS to deploy it at validation.linaro.org
 * (arm-m-image-building-console): several branches in review
 * HardwarePacks: integrate hwpack-install with linaro-media-create: 
DONE

Please let me know if you have any questions.

Kind Regards,


Ian





Re: Error (no boot device found) running qemu image

2010-09-23 Thread Loïc Minier
On Thu, Sep 23, 2010, Michael Hope wrote:
> To overdo things, it would be nice to mix the compiler in as well.
> I've been planning to add a test case to the compiler build that
> builds the current Ubuntu kernel and boots it inside QEMU.

 I think we should have batteries of test cases of each component before
 it sees a release, e.g. we should have a kernel build + run test before
 releasing a new toolchain, and a kernel run test before releasing a new
 QEMU, but this need not be based on crack of the day.  It's sufficient
 to build / run the latest released kernel for these tests rather than a
 git kernel.
   In the same way, I don't think we need to build kernels out of bzr
 toolchains; the latest released toolchain would be good.

 We should get sufficient cross-testing via continuous integration of
 our releases in our development platforms (aka package repositories,
 like maverick or maverick+1).

 It's entirely possible to do as you say, but I think it's basically
 testing too many things at once, resulting in an unstable x unstable x
 unstable combination, rather than stable x stable x unstable, where we
 can pinpoint the offending change quickly.

-- 
Loïc Minier



Re: Rough notes from Kernel Consolidation meeting

2010-09-23 Thread Loïc Minier
On Mon, Sep 20, 2010, Arnd Bergmann wrote:
> Right. Having an intelligent file system is the only way I can see for
> getting good speedups, by avoiding erase-cycles inside the SD card,
> which commonly happen when you write to sectors at random addresses.
> 
> There has been a lot of research in optimizing for regular NAND flash,
> at least some of which should apply to SD cards as well, although
> their naive wear-levelling algorithms might easily get in the way.

 This sounds like a relatively large task; do you think that's something
 we could build on existing infrastructure like some ubi bits or some
 btrfs bits?

 I'm worried that we don't really know what the SD wear-levelling is
 doing, and it might change over time; I'm not sure whether we can
 introspect the way the controllers do it, or whether we'd have some
 fragile heuristics to decide that this or that SD card manufacturer
 uses this kind of algorithm   :-/

 Also, do we know enough about the underlying hardware to basically
 override what the manufacturer is trying to do?

-- 
Loïc Minier



Re: Rough notes from Kernel Consolidation meeting

2010-09-23 Thread Loïc Minier
On Mon, Sep 20, 2010, Arnd Bergmann wrote:
> > Wrt highmem: I can't see the link with highmem and SMP.  As far as I 
> > know, highmem on ARM should be SMP safe already (the only SMP related 
> > issue I've seen has been fixed in commit 831e8047eb).
> 
> Right, it's not related to SMP, I was thinking of using run-time
> patching for both highmem and SMP though. My idea was to
> make the decision between simply doing page_address() and the full
> kmap()/kmap_atomic() statically at boot time depending on the
> amount of memory.
> 
> I looked at the functions again, and I'm now guessing that the difference
> would be minimal because the first thing we check is (PageHighMem(page))
> on a presumably cache-hot struct page. It may be worthwhile comparing
> the performance of a highmem-enabled kernel with a regular kernel
> on a system without highmem, but it may very well be identical.
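
 For reference, the check Arnd mentions boils down to something like this
 sketch (generic kernel API, hypothetical helper name):

#include <linux/highmem.h>
#include <linux/mm.h>

static void *map_page_for_kernel(struct page *page)
{
        /*
         * Lowmem pages are permanently mapped, so page_address() is enough;
         * only highmem pages need a temporary kernel mapping.  The generic
         * kmap() starts with the same PageHighMem() test, which is why the
         * extra cost should be small on a cache-hot struct page.
         */
        if (!PageHighMem(page))
                return page_address(page);
        return kmap(page);      /* caller must kunmap(page) when done */
}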

 This highmem topic comes from the fact that highmem will be needed in
 the period of time between now and LPAE where we have boards with lots
 of memory but we can't address it all without highmem (unless we want
 to revisit the 3g/1g split, but I personally think not).

 I proposed making highmem the default across all Linaro kernels as a
 way to simplify things, perhaps removing the need to bother about this
 config option altogether.
   This proposal does need some investigation on runtime performance; if
 highmem is basically free, then we're good and we can just enable it by
 default.  If it's not, I proposed we do runtime patching just like SMP
 (exactly what Arnd proposed).

 Arnd, Nicolas, would either of you take an action to benchmark the cost
 of CONFIG_HIGHMEM?  That would help us understand what kind of work
 we're looking at.

Thanks!!
-- 
Loïc Minier



Re: Rough notes from Kernel Consolidation meeting

2010-09-23 Thread Loïc Minier
On Mon, Sep 20, 2010, Nicolas Pitre wrote:
> I'm also interested in both of those topics as 1) I participated in the 
> design of the SDIO stack (closely related to SD), and 2) I did the 
> highmem implementation for ARM.

 (You're awesome!  :-)

 When you say SDIO, you mean just SDIO or SD and SDIO?

 SDIO came up, but the main request from TSC-Ms was for _SD_ and I
 inferred that this meant SD as mass storage.  I suspect it's becoming
 common to cut costs by dropping on-board flash and using just SD in
 phones and DTVs.

 However, I didn't check whether it was "anything impacting fs on SD",
 which means we might want to look at filesystems, nor how important
 SDIO was here.  In fact, I didn't think of FSes at all, just thought
 about the throughput of the SD subsystem and specific SD backend
 drivers.

 I'll take an action to go ask the TSC-Ms about this and see what they
 really care about.

   Thanks,
-- 
Loïc Minier



Re: Vexpress LTP questions

2010-09-23 Thread Amit Arora
On Thu, Sep 23, 2010 at 2:24 PM, Loïc Minier  wrote:
> On Wed, Sep 22, 2010, Matt Waddel wrote:
>> fallocate01    1  TFAIL  :  fallocate(5, 0, 49152, 4096) failed: 
>> TEST_ERRNO=EFBIG(27): File too large
>> fallocate01    2  TFAIL  :  fallocate(6, 1, 49152, 4096) failed: 
>> TEST_ERRNO=EFBIG(27): File too large
>> fallocate02    7  TFAIL  :  fallocate(tfile_write_15396:6, 1, 0, 4096) 
>> failed, expected errno:0: TEST_ERRNO=EFBIG(27):
>> File too large
>> fallocate03    1  TFAIL  :  fallocate(tfile_sparse_15397, 0, 8192, 4096) 
>> failed: TEST_ERRNO=EFBIG(27): File too large
>> fallocate03    2  TFAIL  :  fallocate(tfile_sparse_15397, 0, 49152, 4096) 
>> failed: TEST_ERRNO=EFBIG(27): File too large
>> fallocate03    3  TFAIL  :  fallocate(tfile_sparse_15397, 0, 69632, 4096) 
>> failed: TEST_ERRNO=EFBIG(27): File too large
>> fallocate03    4  TFAIL  :  fallocate(tfile_sparse_15397, 0, 102400, 4096) 
>> failed: TEST_ERRNO=EFBIG(27): File too large
>> fallocate03    5  TFAIL  :  fallocate(tfile_sparse_15397, 1, 8192, 4096) 
>> failed: TEST_ERRNO=EFBIG(27): File too large
>> fallocate03    6  TFAIL  :  fallocate(tfile_sparse_15397, 1, 49152, 4096) 
>> failed: TEST_ERRNO=EFBIG(27): File too large
>> fallocate03    7  TFAIL  :  fallocate(tfile_sparse_15397, 1, 77824, 4096) 
>> failed: TEST_ERRNO=EFBIG(27): File too large
>> fallocate03    8  TFAIL  :  fallocate(tfile_sparse_15397, 1, 106496, 4096) 
>> failed: TEST_ERRNO=EFBIG(27): File too large
>
>  fallocate(2) says EFBIG is returned when offset+len exceeds the maximum
>  file size; this would mean your fs doesn't support files that large.

Correct, but the offset+length being requested here doesn't seem to be
too high (12K in some of the calls), so it's a bit surprising.

BTW, which file system is it?  As far as I know, fallocate is supported
only on ext4, ocfs2 and xfs.

Also, if possible, a printk of the actual values of offset and length
in the kernel (in fs/open.c:do_fallocate()) might give some pointers on
what's going on.

So, something like:

    /* Check for wrap through zero too */
-   if (((offset + len) > inode->i_sb->s_maxbytes) || ((offset + len) < 0))
+   if (((offset + len) > inode->i_sb->s_maxbytes) ||
+       ((offset + len) < 0)) {
+       printk("sys_fallocate: offset = 0x%llx, length = 0x%llx, "
+              "max FS size = 0x%llx\n", offset, len,
+              inode->i_sb->s_maxbytes);
        return -EFBIG;
+   }


If these numbers are higher than what's being passed by the LTP test,
then there is something wrong in how ARM is handling the fallocate
parameters.  Otherwise, the maximum file size supported by the file
system should be looked at (the last value in the printk above).

Regards,
Amit Arora



Re: Rough notes from Kernel Consolidation meeting

2010-09-23 Thread Nicolas Pitre
On Thu, 23 Sep 2010, Loïc Minier wrote:

>  When you say SDIO, you mean just SDIO or SD and SDIO?

I wrote a driver for one SD/SDIO host controller, played with the code
for two other controllers, and wrote part of the SDIO stack.  All of this
shares common infrastructure with pure SD cards; hence my interest in
the topic, given the overlap, especially at the low level.  I'm less
familiar with the filesystem issues that Arnd is mentioning, though.

>  SDIO came up, but the main request from TSC-Ms was for _SD_ and I
>  inferred that this meant SD as a mass storage.  I suspect it's getting
>  common to get cheaper by getting rid of flash and using just SD in
>  phones and DTVs.

Indeed.

>  However, I didn't check whether it was "anything impacting fs on SD",
>  which means we might want to look at filesystems, nor how important
>  SDIO was here.  In fact, I didn't think of FSes at all, just thought
>  about the throughput of the SD subsystem and specific SD backend
>  drivers.

SDIO is also becoming important as this is often the preferred 
interconnect for wireless chips.


Nicolas


Re: Rough notes from Kernel Consolidation meeting

2010-09-23 Thread Nicolas Pitre
On Thu, 23 Sep 2010, Loïc Minier wrote:

> On Mon, Sep 20, 2010, Arnd Bergmann wrote:
> > > Wrt highmem: I can't see the link with highmem and SMP.  As far as I 
> > > know, highmem on ARM should be SMP safe already (the only SMP related 
> > > issue I've seen has been fixed in commit 831e8047eb).
> > 
> > Right, it's not related to SMP, I was thinking of using run-time
> > patching for both highmem and SMP though. My idea was to
> > make the decision between simply doing page_address() and the full
> > kmap()/kmap_atomic() statically at boot time depending on the
> > amount of memory.
> > 
> > I looked at the functions again, and I'm now guessing that the difference
> > would be minimal because the first thing we check is (PageHighMem(page))
> > on a presumably cache-hot struct page. It may be worthwhile comparing
> > the performance of a highmem-enabled kernel with a regular kernel
> > on a system without highmem, but it may very well be identical.
> 
>  This highmem topic comes from the fact that highmem will be needed in
>  the period of time between now and LPAE where we have boards with lots
>  of memory but we can't address it all without highmem (unless we want
>  to revisit the 3g/1g split, but I personally think not).

Note that LPAE does require highmem to be useful.  The only way highmem 
could be avoided is to move to a 64-bit architecture.

>  I proposed making highmem the default across all Linaro kernels as a
>  way to simplify things, perhaps removing the need to bother about this
>  config option altogether.
>This proposal does need some investigation on runtime performance; if
>  highmem is basically free, then we're good and we can just enable it by
>  default.  If it's not, I proposed we do runtime patching just like SMP
>  (exactly what Arnd proposed).
> 
>  Arnd, Nicolas, would either of you take an action to benchmark the cost
>  of CONFIG_HIGHMEM?  That would help us understand what kind of work
>  we're looking at.

Sure.  I don't think the highmem overhead is that significant, 
especially when it doesn't kick in i.e. when total RAM is below 800MB or 
so.  But I'm skeptical about the gain that runtime patching for this 
particular case could bring.

The runtime patching of the kernel is useful for simple and 
straight-forward cases such as SMP ops which are performed in assembly.  
But in this case I'm afraid this could add even more overhead in the 
end, especially when highmem is active. But if the overhead of simply 
enabling highmem is not significant enough to be measurable in the first 
place then this is moot.


Nicolas


Re: Vexpress LTP questions

2010-09-23 Thread Loïc Minier
On Thu, Sep 23, 2010, Amit Arora wrote:
> BTW, which file system is it ? From last I know, fallocate was
> supported on only ext4, ocfs and xfs.

 It's probably the linaro-media-create default, ext3, which we should at
 least bump to ext4, I guess.
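
 To double-check which filesystem the LTP scratch directory actually sits
 on, /proc/mounts is more telling than statfs() (ext2/3/4 all report the
 same f_type magic).  A rough sketch, with "/tmp" as an assumed default:

#include <mntent.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
        const char *target = argc > 1 ? argv[1] : "/tmp";
        char best_type[64] = "unknown";
        size_t best_len = 0;
        struct mntent *ent;
        FILE *mounts = setmntent("/proc/mounts", "r");

        if (!mounts) {
                perror("/proc/mounts");
                return 1;
        }
        /* Longest mount-point prefix of the target wins (simplistic match). */
        while ((ent = getmntent(mounts)) != NULL) {
                size_t len = strlen(ent->mnt_dir);
                if (strncmp(target, ent->mnt_dir, len) == 0 && len >= best_len) {
                        best_len = len;
                        snprintf(best_type, sizeof(best_type), "%s", ent->mnt_type);
                }
        }
        endmntent(mounts);

        printf("%s is on a %s filesystem\n", target, best_type);
        return 0;
}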

-- 
Loïc Minier



Re: Rough notes from Kernel Consolidation meeting

2010-09-23 Thread Loïc Minier
On Thu, Sep 23, 2010, Nicolas Pitre wrote:
> The runtime patching of the kernel is useful for simple and 
> straight-forward cases such as SMP ops which are performed in assembly.  
> But in this case I'm afraid this could add even more overhead in the 
> end, especially when highmem is active. But if the overhead of simply 
> enabling highmem is not significant enough to be measurable in the first 
> place then this is moot.

 Agreed; let's benchmark and decide whether it's worth thinking about
 :)

-- 
Loïc Minier



Re: Error (no boot device found) running qemu image

2010-09-23 Thread Christian Robottom Reis
On Thu, Sep 23, 2010 at 08:18:06AM +1200, Michael Hope wrote:
> >> I had been thinking about this as well, would you see more benefit for this
> >> to focus on testing the latest image, or qemu itself?
> >
> > I think the image (and the kernel) are going to churn at a much faster
> > rate than qemu will, so the benefit really is in early warning if the
> > kernel has suddenly tickled a latent qemu bug/missing bit of model.
> > (ie I think we'd want to do this test with the latest released qemu-maemo
> > package as well as with any new daily build of qemu.)
> 
> To overdo things, it would be nice to mix the compiler in as well.
> I've been planning to add a test case to the compiler build that
> builds the current Ubuntu kernel and boots it inside QEMU.

Let's just test the image. The toolchain and qemu will be tested as a
side-effect, but let's not lessen the benefit of ensuring our image runs
on a "released" qemu.
-- 
Christian Robottom Reis   | [+55 16] 3376 0125 | http://launchpad.net/~kiko
Canonical Ltd.| [+55 16] 9112 6430 | http://async.com.br/~kiko



Re: Rough notes from Kernel Consolidation meeting

2010-09-23 Thread Arnd Bergmann
On Thursday 23 September 2010, Loïc Minier wrote:
> On Mon, Sep 20, 2010, Arnd Bergmann wrote:
> > Right. Having an intelligent file system is the only way I can see for
> > getting good speedups, by avoiding erase-cycles inside the SD card,
> > which commonly happen when you write to sectors at random addresses.
> > 
> > There has been a lot of research in optimizing for regular NAND flash,
> > at least some of which should apply to SD cards as well, although
> > their naive wear-levelling algorithms might easily get in the way.
> 
>  This sounds like a relatively large task; do you think that's something
>  we could build on existing infrastructure like some ubi bits or some
>  btrfs bits?

Definitely. I wasn't suggesting we reinvent the wheel, but there may be
a lot of value in comparing what's there today (logfs, ubifs, btrfs, nilfs2)
to see if any of them does the job, and possibly adding a few extensions.

The current state is mostly that people put unaligned partitions on their
SD card, stick an ext3 fs on a partition and watch performance suck
while destroying their cards.
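
As a back-of-the-envelope illustration of the alignment point, assuming a
4 MiB erase block (the real value varies per card and is rarely published):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        const uint64_t sector_size = 512;               /* bytes */
        const uint64_t erase_block = 4 * 1024 * 1024;   /* assumed size */
        const uint64_t start_sector = 63;               /* classic misaligned fdisk default */

        /* Round the partition start up to the next erase-block boundary. */
        uint64_t start_bytes = start_sector * sector_size;
        uint64_t aligned_bytes = (start_bytes + erase_block - 1) / erase_block
                                 * erase_block;

        printf("start sector %llu -> aligned start sector %llu\n",
               (unsigned long long)start_sector,
               (unsigned long long)(aligned_bytes / sector_size));
        return 0;
}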

>  I'm worried that we don't really know what the SD wear-levelling is
>  doing, and it might change over time; I'm not sure whether we can
>  introspect the way the controllers do it, or whether we'd have some
>  fragile heuristics to decide that this or that SD card manufacturer
>  uses this kind of algorithm   :-/
> 
>  Also, do we know enough about the underlying hardware to basically
>  override what the manufacturer is trying to do?

There has been a study by Thomas Gleixner on what CF cards do, which
basically showed that they all use the same broken algorithm. It may
be interesting to do the same for SD cards. We could also ask the
Samsung people in Linaro to find out more technical details than
are currently known publicly about Samsung SD cards.

Arnd



Re: Rough notes from Kernel Consolidation meeting

2010-09-23 Thread Arnd Bergmann
On Thursday 23 September 2010, Nicolas Pitre wrote:
> >  This highmem topic comes from the fact that highmem will be needed in
> >  the period of time between now and LPAE where we have boards with lots
> >  of memory but we can't address it all without highmem (unless we want
> >  to revisit the 3g/1g split, but I personally think not).
> 
> Note that LPAE does require highmem to be useful.  The only way highmem 
> could be avoided is to move to a 64-bit architecture.

Right, I'd even say LPAE can only make things worse because people
will stick even more memory into their systems, most of which then
becomes highmem.

We might be able to use MMU features to implement a 4G/4G split, which
lets us use 3GB physical RAM or more (depending on vmalloc and I/O sizes)
without highmem, but can have an even higher cost by significantly
slowing down uaccess.

> >  Arnd, Nicolas, would either of you take an action to benchmark the cost
> >  of CONFIG_HIGHMEM?  That would help us understand what kind of work
> >  we're looking at.
> 
> Sure.  I don't think the highmem overhead is that significant, 
> especially when it doesn't kick in i.e. when total RAM is below 800MB or 
> so.  But I'm skeptical about the gain that runtime patching for this 
> particular case could bring.

Yes, that's what I thought after looking into the thing in more detail.
It looked promising at first, but I now doubt it would be measurable.

Arnd



Re: Rough notes from Kernel Consolidation meeting

2010-09-23 Thread Loïc Minier
On Thu, Sep 23, 2010, Arnd Bergmann wrote:
> The current state is mostly that people put unaligned partitions on their
> SD card, stick an ext3 fs on a partition and watch performance suck
> while destroying their cards.

 Agreed  :-)

> There has been a study by Thomas Gleixner on what CF cards do, which
> basically showed that they all use the same broken algorithm. It may
> be interesting to do the same for SD cards. We could also ask the
> Samsung people in Linaro to find out more technical details than
> are currently known publically about Samsung SD cards.

 Ok; that sounds good.  I will check with the TSC what exact SD work
 they are expecting, and then we can see whether there's an FS part to
 it; in any case it sounds like useful research.

-- 
Loïc Minier



[PATCH 1/2] OMAP3 PM: move omap3 sleep to ddr

2010-09-23 Thread vishwanath . sripathy
From: Vishwanath BS 

There is no need to keep the omap3 sleep code in SRAM. This code can run
perfectly well from DDR. This will help us instrument CPUIdle latencies.

Tested on ZOOM3.

Signed-off-by: Vishwanath BS 
---
 arch/arm/mach-omap2/pm34xx.c |9 +
 1 files changed, 1 insertions(+), 8 deletions(-)

diff --git a/arch/arm/mach-omap2/pm34xx.c b/arch/arm/mach-omap2/pm34xx.c
index 85ef245..ed9d12c 100644
--- a/arch/arm/mach-omap2/pm34xx.c
+++ b/arch/arm/mach-omap2/pm34xx.c
@@ -79,8 +79,6 @@ struct power_state {
 
 static LIST_HEAD(pwrst_list);
 
-static void (*_omap_sram_idle)(u32 *addr, int save_state);
-
 static int (*_omap_save_secure_sram)(u32 *addr);
 
 static struct powerdomain *mpu_pwrdm, *neon_pwrdm;
@@ -360,9 +358,6 @@ void omap_sram_idle(void)
int core_prev_state, per_prev_state;
u32 sdrc_pwr = 0;
 
-   if (!_omap_sram_idle)
-   return;
-
pwrdm_clear_all_prev_pwrst(mpu_pwrdm);
pwrdm_clear_all_prev_pwrst(neon_pwrdm);
pwrdm_clear_all_prev_pwrst(core_pwrdm);
@@ -438,7 +433,7 @@ void omap_sram_idle(void)
 * get saved. The restore path then reads from this
 * location and restores them back.
 */
-   _omap_sram_idle(omap3_arm_context, save_state);
+   omap34xx_cpu_suspend(omap3_arm_context, save_state);
cpu_init();
 
if (is_suspending())
@@ -995,8 +990,6 @@ static int __init clkdms_setup(struct clockdomain *clkdm, 
void *unused)
 
 void omap_push_sram_idle(void)
 {
-   _omap_sram_idle = omap_sram_push(omap34xx_cpu_suspend,
-   omap34xx_cpu_suspend_sz);
if (omap_type() != OMAP2_DEVICE_TYPE_GP)
_omap_save_secure_sram = omap_sram_push(save_secure_ram_context,
save_secure_ram_context_sz);
-- 
1.7.0.4




[PATCH 0/2] OMAP3 PM: sleep code clean up

2010-09-23 Thread vishwanath . sripathy
From: Vishwanath BS 

This patch series has some clean up in OMAP3 sleep code. 

Vishwanath BS (2):
  OMAP3 PM: move omap3 sleep to ddr
  OMAP3 PM: sleep code clean up

 arch/arm/mach-omap2/pm34xx.c  |9 +-
 arch/arm/mach-omap2/sleep34xx.S   |  375 ++---
 arch/arm/plat-omap/include/plat/control.h |2 +
 3 files changed, 189 insertions(+), 197 deletions(-)




[PATCH 2/2] OMAP3 PM: sleep code clean up

2010-09-23 Thread vishwanath . sripathy
From: Vishwanath BS 

This patch does some cleanup of the omap3 sleep code.
Basically all possible hardcoded values are removed, and the code is
reorganized into more logical buckets for better readability.

Tested on ZOOM3.

Signed-off-by: Vishwanath BS 
---
 arch/arm/mach-omap2/sleep34xx.S   |  375 ++---
 arch/arm/plat-omap/include/plat/control.h |2 +
 2 files changed, 188 insertions(+), 189 deletions(-)

diff --git a/arch/arm/mach-omap2/sleep34xx.S b/arch/arm/mach-omap2/sleep34xx.S
index ba53191..845da09
--- a/arch/arm/mach-omap2/sleep34xx.S
+++ b/arch/arm/mach-omap2/sleep34xx.S
@@ -33,17 +33,20 @@
 #include "prm.h"
 #include "sdrc.h"
 
-#define SDRC_SCRATCHPAD_SEM_V  0xfa00291c
+#define SDRC_SCRATCHPAD_SEM_OFFS   0xc
+#define SDRC_SCRATCHPAD_SEM_V  OMAP343X_SCRATCHPAD_REGADDR \
+                               (SDRC_SCRATCHPAD_SEM_OFFS)
 
 #define PM_PREPWSTST_CORE_V    OMAP34XX_PRM_REGADDR(CORE_MOD, \
-                               OMAP3430_PM_PREPWSTST)
-#define PM_PREPWSTST_CORE_P    0x48306AE8
+                                       OMAP3430_PM_PREPWSTST)
+#define PM_PREPWSTST_CORE_P    OMAP3430_PRM_BASE + CORE_MOD + \
+                                       OMAP3430_PM_PREPWSTST
 #define PM_PREPWSTST_MPU_V OMAP34XX_PRM_REGADDR(MPU_MOD, \
OMAP3430_PM_PREPWSTST)
 #define PM_PWSTCTRL_MPU_P  OMAP3430_PRM_BASE + MPU_MOD + OMAP2_PM_PWSTCTRL
 #define CM_IDLEST1_CORE_V  OMAP34XX_CM_REGADDR(CORE_MOD, CM_IDLEST1)
 #define SRAM_BASE_P0x4020
-#define CONTROL_STAT   0x480022F0
+#define CONTROL_STAT   OMAP343X_CTRL_BASE + OMAP343X_CONTROL_STATUS
 #define SCRATCHPAD_MEM_OFFS0x310 /* Move this as correct place is
   * available */
 #define SCRATCHPAD_BASE_P  (OMAP343X_CTRL_BASE + OMAP343X_CONTROL_MEM_WKUP\
@@ -184,29 +187,16 @@ api_params:
 ENTRY(save_secure_ram_context_sz)
.word   . - save_secure_ram_context
 
-/*
- * Forces OMAP into idle state
- *
- * omap34xx_suspend() - This bit of code just executes the WFI
- * for normal idles.
- *
- * Note: This code get's copied to internal SRAM at boot. When the OMAP
- *  wakes up it continues execution at the point it went to sleep.
- */
-ENTRY(omap34xx_cpu_suspend)
+/* Function to execute WFI. When the MPU wakes up from retention
+ * or inactive mode, it continues execution just after wfi */
+ENTRY(omap34xx_do_wfi)
stmfd   sp!, {r0-r12, lr}   @ save registers on stack
-loop:
-   /*b loop*/  @Enable to debug by stepping through code
-   /* r0 contains restore pointer in sdram */
-   /* r1 contains information about saving context */
+
ldr r4, sdrc_power  @ read the SDRC_POWER register
ldr r5, [r4]@ read the contents of SDRC_POWER
orr r5, r5, #0x40   @ enable self refresh on idle req
str r5, [r4]@ write back to SDRC_POWER register
 
-   cmp r1, #0x0
-   /* If context save is required, do that and execute wfi */
-   bne save_context_wfi
/* Data memory barrier and Data sync barrier */
mov r1, #0
mcr p15, 0, r1, c7, c10, 4
@@ -225,8 +215,182 @@ loop:
nop
nop
bl wait_sdrc_ok
+   ldmfd   sp!, {r0-r12, pc}   @ restore regs and return
+
+/*
+ * Forces OMAP into idle state
+ *
+ * omap34xx_cpu_suspend() - This bit of code just executes the WFI
+ * for normal idles and saves the context before WFI on off modes.
+ *
+ */
+
+ENTRY(omap34xx_cpu_suspend)
+   stmfd   sp!, {r0-r12, lr}   @ save registers on stack
+loop:
+   /*b loop*/  @Enable to debug by stepping through code
+   /* r0 contains restore pointer in sdram */
+   /* r1 contains information about saving context */
+
+   cmp r1, #0x0
+   /* If context save is required, do that and execute wfi */
+   bne save_context_wfi
+   bl omap34xx_do_wfi
 
ldmfd   sp!, {r0-r12, pc}   @ restore regs and return
+
+save_context_wfi:
+   /*b save_context_wfi*/  @ enable to debug save code
+   mov r8, r0 /* Store SDRAM address in r8 */
+   mrc p15, 0, r5, c1, c0, 1   @ Read Auxiliary Control Register
+   mov r4, #0x1@ Number of parameters for restore call
+   stmia   r8!, {r4-r5}@ Push parameters for restore call
+   mrc p15, 1, r5, c9, c0, 2   @ Read L2 AUX ctrl register
+   stmia   r8!, {r4-r5}@ Push parameters for restore call
+/* Check what that target sleep state is:stored in r1*/
+/* 1 - Only L1 and logic lost */
+/* 2 - Only L2 lost */
+/* 3 - Both L1 and L2 lost */
+   cmp r1, #0x2 /* Only L2 lost */
+   beq clean_l2
+   cmp r1, #0x1 /* L2 retained */
+   /* r9 stores w

Re: Rough notes from Kernel Consolidation meeting

2010-09-23 Thread Catalin Marinas
On Thu, 2010-09-23 at 16:15 +0100, Arnd Bergmann wrote:
> On Thursday 23 September 2010, Nicolas Pitre wrote:
> > >  This highmem topic comes from the fact that highmem will be needed in
> > >  the period of time between now and LPAE where we have boards with lots
> > >  of memory but we can't address it all without highmem (unless we want
> > >  to revisit the 3g/1g split, but I personally think not).
> >
> > Note that LPAE does require highmem to be useful.  The only way highmem
> > could be avoided is to move to a 64-bit architecture.
> 
> Right, I'd even say LPAE can only make things worse because people
> will stick even more memory into their systems, most of which then
> becomes highmem.

If you really need so much memory, it's more efficient to have LPAE
+highmem than a swap device. The problem is if the OS doesn't need so
much memory but it is available, Linux tries to allocate from highmem
first. What could help is a different zone fall back mechanism trying to
allocate from lowmem up to a certain threshold.

Another option would be to use the highmem for hosting a swap via some
form of ramdisk or slram/phram.

Yet another option is some dynamic memory hotplug based on the amount of
spare memory you've got.

> We might be able to use MMU features to implement a 4G/4G split, which
> lets us use 3GB physical RAM or more (depending on vmalloc and I/O sizes)
> without highmem, but can have an even higher cost by significantly
> slowing down uaccess.

It would be tricky to create temporary mappings for uaccess (and may
involve get_user_pages or some form of pinning the pages in memory).

-- 
Catalin




Re: Rough notes from Kernel Consolidation meeting

2010-09-23 Thread Arnd Bergmann
On Thursday 23 September 2010 19:03:42 Catalin Marinas wrote:
> On Thu, 2010-09-23 at 16:15 +0100, Arnd Bergmann wrote:
> > On Thursday 23 September 2010, Nicolas Pitre wrote:
> > > >  This highmem topic comes from the fact that highmem will be needed in
> > > >  the period of time between now and LPAE where we have boards with lots
> > > >  of memory but we can't address it all without highmem (unless we want
> > > >  to revisit the 3g/1g split, but I personally think not).
> > >
> > > Note that LPAE does require highmem to be useful.  The only way highmem
> > > could be avoided is to move to a 64-bit architecture.
> > 
> > Right, I'd even say LPAE can only make things worse because people
> > will stick even more memory into their systems, most of which then
> > becomes highmem.
> 
> If you really need so much memory, it's more efficient to have LPAE
> +highmem than a swap device. The problem is if the OS doesn't need so
> much memory but it is available, Linux tries to allocate from highmem
> first. What could help is a different zone fall back mechanism trying to
> allocate from lowmem up to a certain threshold.
> 
> Another option would be to use the highmem for hosting a swap via some
> form of ramdisk or slram/phram.
> 
> Yet another option is some dynamic memory hotplug based on the amount of
> spare memory you've got.

Right. Unfortunately all of these ideas depend a lot on the workload
you actually want to run. For the general case, highmem is probably the
best we can do.

If you know you have at most 2GB of memory, the 2g/2g split is also
an interesting option for many workloads.

Yet another variant of your phram swap is to use compressed swap,
whatever that is called nowadays.

> > We might be able to use MMU features to implement a 4G/4G split, which
> > lets us use 3GB physical RAM or more (depending on vmalloc and I/O sizes)
> > without highmem, but can have an even higher cost by significantly
> > slowing down uaccess.
> 
> It would be tricky to create temporary mappings for uaccess (and may
> involve get_user_pages or some form of pinning the pages in memory).

That's what I meant by expensive. You end up with get_user turning
into

get_user_pages_fast()
kmap_atomic()
memcpy()
kunmap_atomic()
put_page()

The get_user_pages_fast is bad enough, but if you're unfortunate enough
to still require highmem, the kmap_atomic is going to hurt even more.

With a pure 4G/4G split and highmem disabled, it may be worth trying,
but again it will depend on the workload if there is anything to gain here.
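
Spelled out as code, a single get_user-style access under a pure 4G/4G
split could look roughly like the sketch below (hypothetical helper, using
today's one-argument kmap_atomic() and handling only one page):

#include <linux/highmem.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/uaccess.h>

static long copy_chunk_from_user_4g4g(void *dst, const void __user *src,
                                      size_t len)
{
        unsigned long offset = (unsigned long)src & ~PAGE_MASK;
        size_t chunk = min_t(size_t, len, PAGE_SIZE - offset);
        struct page *page;
        void *kaddr;

        /* Pin the user page; this alone is already expensive. */
        if (get_user_pages_fast((unsigned long)src, 1, 0, &page) != 1)
                return -EFAULT;

        kaddr = kmap_atomic(page);      /* hurts more if the page is highmem */
        memcpy(dst, kaddr + offset, chunk);
        kunmap_atomic(kaddr);
        put_page(page);

        return chunk;   /* a real implementation would loop over pages */
}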


Arnd



Re: Rough notes from Kernel Consolidation meeting

2010-09-23 Thread Nicolas Pitre
On Thu, 23 Sep 2010, Catalin Marinas wrote:

> On Thu, 2010-09-23 at 16:15 +0100, Arnd Bergmann wrote:
> > On Thursday 23 September 2010, Nicolas Pitre wrote:
> > > >  This highmem topic comes from the fact that highmem will be needed in
> > > >  the period of time between now and LPAE where we have boards with lots
> > > >  of memory but we can't address it all without highmem (unless we want
> > > >  to revisit the 3g/1g split, but I personally think not).
> > >
> > > Note that LPAE does require highmem to be useful.  The only way highmem
> > > could be avoided is to move to a 64-bit architecture.
> > 
> > Right, I'd even say LPAE can only make things worse because people
> > will stick even more memory into their systems, most of which then
> > becomes highmem.
> 
> If you really need so much memory, it's more efficient to have LPAE
> +highmem than a swap device. The problem is if the OS doesn't need so
> much memory but it is available, Linux tries to allocate from highmem
> first. What could help is a different zone fall back mechanism trying to
> allocate from lowmem up to a certain threshold.

Beware the subtlety here.  The kernel will target highmem first for user 
space allocations, as this is in most cases memory that the kernel won't 
have to touch.  Typically you get user memory populated with 
application code and data through DMA and the kernel doesn't have to 
kmap() those pages.  Even swapping user space pages doesn't require that 
the kernel see the content of those pages.  But that works out _only_ if 
IO is performed through DMA, and that DMA can be done on the full 
physical address range.  As soon as you need to bounce data into lowmem 
you start to lose.

Also when highmem is involved, the proportion of low pages vs high pages 
becomes quickly small (more than 3 times as many highmem pages than 
lowmem pages when there is 4G of RAM), and lowmem pages become a scarce 
resource.  It is normal in that case to favor highmem page allocations 
as much as possible.

> Another option would be to use the highmem for hosting a swap via some
> form of ramdisk or slram/phram.

This is useless when highmem is allocated to user space.  Better to
simply allocate user space in highmem directly and do nothing more than
switch page tables on context switch.


Nicolas



Refactoring linaro media create

2010-09-23 Thread Guilherme Salgado
Hi,

I've recently changed l-m-c to take an hwpack, which is installed before
the image is built.  To keep that change simple I had to unpack the
binary tarball into a tmp directory, install the hwpack, repack the
tarball and then continue with the image generation.  This extra
unpacking/repacking of the binary tarball is obviously not ideal, so I'm
now refactoring l-m-c to make it possible to avoid it when installing an
hwpack.

The one thing I'm not sure about is whether I should 

 a) unpack the binary tarball to a tmp dir, install the hwpack (which
may cause lots of data to be written) and then move that to the SD card
or
 b) unpack the tarball straight onto the SD card (as is done currently)
and then install the hwpack

I'm leaning towards a) because of the poor write speed of SD cards, but
if anybody knows of any reasons why I should go with b) now's the time
to tell me. :)

Cheers,

-- 
Guilherme Salgado 







Re: [PATCH 0/2] OMAP3 PM: sleep code clean up

2010-09-23 Thread Amit Kucheria
Vishwa,

On 10 Sep 23, Vishwanath Sripathy wrote:
> From: Vishwanath BS 
>
> This patch series has some clean up in OMAP3 sleep code. 
> 
> Vishwanath BS (2):
>   OMAP3 PM: move omap3 sleep to ddr
>   OMAP3 PM: sleep code clean up
> 
>  arch/arm/mach-omap2/pm34xx.c  |9 +-
>  arch/arm/mach-omap2/sleep34xx.S   |  375 
> ++---
>  arch/arm/plat-omap/include/plat/control.h |2 +
>  3 files changed, 189 insertions(+), 197 deletions(-)
> 

What tree are you working against? These patches fail to apply cleanly
against Linus' 2.6.36-rc4 or against linux-omap master. There is some fuzz
when using patch.

Regards,
Amit



Re: [PATCH 0/2] OMAP3 PM: sleep code clean up

2010-09-23 Thread Vishwanath Sripathy
Amit,

> On Fri, Sep 24, 2010 at 12:13 PM, Amit Kucheria 
> wrote:
> Vishwa,
>
>
> > On 10 Sep 23, Vishwanath Sripathy wrote:
> > From: Vishwanath BS 
> >
> > This patch series has some clean up in OMAP3 sleep code.
> >
> > Vishwanath BS (2):
> >   OMAP3 PM: move omap3 sleep to ddr
> >   OMAP3 PM: sleep code clean up
> >
> >  arch/arm/mach-omap2/pm34xx.c  |9 +-
> >  arch/arm/mach-omap2/sleep34xx.S   |  375
> ++---
> >  arch/arm/plat-omap/include/plat/control.h |2 +
> >  3 files changed, 189 insertions(+), 197 deletions(-)
> >
>
> > What tree are you working against? These patches fail to apply cleanly
> > against Linus' 2.6.36-rc4 or against linux-omap master. There is some
> > fuzz when using patch.

I have based these patches on top of Kevin's latest pm branch.

Vishwa

> > Regards,
> > Amit
>