Viktor Turskyi wrote:
Is there any performance problem with hard links in ZFS?
I have a large storage system in which there will be nearly 50,000 hard links to every file. Is that OK for ZFS? Might there be problems with snapshots (a snapshot will be created every 30 minutes)? And what is the difference in speed between working with 50,000 hard links and 50,000 separate files?
PS: It would be very useful if you could give me some links about low-level hard-link processing.


On my 2 GHz Opteron with two mirrored ZFS disks:


cyber% cat test.c
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

int
main(int argc, char *argv[])
{
        int i;
        char *filename;
        char buffer[1024];

        if (argc != 2) {
                fprintf(stderr, "usage: %s filename\n", argv[0]);
                exit(1);
        }

        strcpy(buffer, argv[1]);
        /* filename points just past the copied name; the suffix goes there */
        filename = buffer + strlen(buffer);

        for (i = 0; i < 50000; i++) {
                /*
                 * Note: the \n ends up embedded in each filename, which
                 * is why the ls | wc output below shows two lines per link.
                 */
                sprintf(filename, "_%d\n", i);
                if (link("foo", buffer) < 0) {
                        perror("link");
                        exit(1);
                }
        }
        return (0);
}

cyber% ls
test    test.c
cyber% cc -o test test.c
cyber% mkfile 10k foo
cyber% /bin/ptime ./test foo

real        0.976
user        0.039
sys         0.936
cyber% ls | wc
  100003   50003  538906
cyber% /bin/ptime rm foo_*

real        1.869
user        0.110
sys         1.757
cyber%

So it takes just under 1 second to create 50,000 hard links to a file, and just under 2 seconds to delete them with rm. It would probably be faster to use a program to delete them.
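
For example, here is a minimal, untested sketch of such a deletion program. It assumes the foo_N names produced by test.c above (stray trailing newline included) and simply unlink()s each one, skipping rm's per-argument processing:

#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int
main(void)
{
        int i;
        char name[64];

        for (i = 0; i < 50000; i++) {
                /* reconstruct the names test.c created, newline included */
                (void) snprintf(name, sizeof (name), "foo_%d\n", i);
                if (unlink(name) < 0) {
                        perror("unlink");
                        exit(1);
                }
        }
        return (0);
}

The same harness could also answer the 50,000-separate-files comparison from the original question: replace the link() call in test.c with something like open(buffer, O_CREAT | O_WRONLY, 0644) followed by close(), and compare the ptime numbers.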

- Bart


--
Bart Smaalders                  Solaris Kernel Performance
[EMAIL PROTECTED]               http://blogs.sun.com/barts