USB Flash Drive Quality
Ah, I found a much faster and more reliable method: dd an ISO to the USB stick. It gives the same results as zeroing the whole stick, but is much faster, and it doesn't depend on the file system.
Code: Select all
# dd if=xerus64-8.3.iso of=/dev/sdb
856064+0 records in
856064+0 records out
438304768 bytes (438 MB) copied, 191,186 s, 2,3 MB/s
NOTE: DON'T USE dd ON A USB STICK WITH IMPORTANT DATA!
ALWAYS BACK UP YOUR DATA!
Last edited by linuxcbon on Mon 31 Jul 2017, 18:55, edited 1 time in total.
- BarryK
- Puppy Master
- Posts: 9392
- Joined: Mon 09 May 2005, 09:23
- Location: Perth, Western Australia
- Contact:
linuxcbon wrote:
Hi all,
I have a remark: isn't zeroing the whole stick more reliable?
Until now, your results depend on the file system and maybe other factors...
Why not just do a # dd if=/dev/zero of=/dev/sdb
I don't know how long it takes with a huge stick. Mine is 2 GB, so it is done quickly. It is a very crappy USB 2.0 stick. I got:
Code: Select all
# dd if=/dev/zero of=/dev/sdb
dd: writing to `/dev/sdb': No space left on device
4137985+0 records in
4137984+0 records out
2118647808 bytes (2,1 GB) copied, 823,927 s, 2,6 MB/s
You will at least need to add "conv=fdatasync", or, if using busybox dd where that doesn't work, use "conv=fsync". Without it, dd will finish before completely flushing data to the drive, and that could be a lot of data.
I also recommend that you specify a large block size, as the flash technology prefers it, say "bs=1M".
And while you are at it, append "oflag=direct", which, I am not sure, but think may bypass the flash drive cache.
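Barry's suggested flags can be tried out harmlessly before pointing dd at a real device. Below is a minimal sketch that times a flushed write to a throwaway scratch file; the mktemp file is a stand-in of my own choosing, not anything from the posts above.

```shell
# Minimal sketch: time a flushed write without touching any real device.
# conv=fdatasync forces the data to be physically written before dd
# reports its transfer rate, so the rate is not just the page cache.
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1M count=16 conv=fdatasync 2>&1 | tail -n 1
rm -f "$scratch"
```

On a mounted flash drive the equivalent would be of=/mnt/sdb1/dummyfile; dd's summary line then reflects the drive's sustained write rate rather than the cache.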
Last edited by BarryK on Mon 31 Jul 2017, 00:23, edited 1 time in total.
[url]https://bkhome.org/news/[/url]
BarryK wrote:
Here are more tests:
http://barryk.org/news/?viewDetailed=00631
The question that was asked was whether the type of filesystem makes a difference. It does!
OK, let's do a test:
linuxcbon wrote:
Ah, I found a much faster and reliable method
dd an iso to the usb :
Code: Select all
# dd if=custom-puppy.iso of=/dev/sdb1
470984+0 records in
470984+0 records out
241143808 bytes (241 MB) copied, 87.0704 s, 2.8 MB/s
Ouch, that was a surprisingly big difference between fat32, ext2 and ext4! Thank you for testing, Barry! I use ext2, and I usually only back up to the flash sticks. When I'm finished with a session and do the backups, I am really not very concerned about the upload speed; my files are usually not that big.
tallboy
True freedom is a live Puppy on a multisession CD/DVD.
BarryK wrote:
And while you are at it, append "oflag=direct", which, I am not sure, but think that it may bypass the flash drive cache.
I'm wondering whether this is a real factor in the measurements, especially if there is or isn't a cache to start with. Is it the cache speed and capacity affecting your measurements, rather than the overall performance of the flash drive? Maybe different block sizes would result in different speeds, with the cache being filled before the real memory write-out is completed.
I'd like to see some re-testing with and without this setting. I also wonder if there is some way to find out how much cache is in there and what its specs are, as I'm sure it would also affect the results.
I'd also like to see what happens if you write to a so-called 'unformatted' drive partition, where it's writing 'raw'. That's supposed to be the fastest for a hard drive (but the easiest to corrupt), so it needs software to support the partition table, as we used to do with some of our largest Netware servers.
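The block-size question above can be probed the same way, without risking a device. Here is a rough sketch of my own that writes the same 16 MiB at three block sizes and prints the rate dd reports for each; each "bs count" pair multiplies out to 16 MiB.

```shell
# Rough block-size sweep against a scratch file (not a real device).
# Compare the rates dd reports for small versus large block sizes.
scratch=$(mktemp)
for pair in "4k 4096" "64k 256" "1M 16"; do
    set -- $pair
    printf 'bs=%s: ' "$1"
    dd if=/dev/zero of="$scratch" bs="$1" count="$2" conv=fdatasync 2>&1 | tail -n 1
done
rm -f "$scratch"
```

To measure the drive itself rather than the page cache, point the output file at the mounted stick instead of a temp file.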
Uh, uh! I must have made a mistake, I need some help here!
The last test, from the post above:
Code: Select all
# dd if=custom-puppy.iso of=/dev/sdb1
470984+0 records in
470984+0 records out
241143808 bytes (241 MB) copied, 87.0704 s, 2.8 MB/s
I wrote /dev/sdb1 instead of /mnt/sdb1. I tried to correct the test, but:
Code: Select all
# dd if=custom-puppy.iso of=/mnt/sdb1/dummyfile
dd: opening `/mnt/sdb1/dummyfile': No space left on device
This is strange, to say it mildly. It is still my SanDisk Cruzer Edge 32 GB that used to have 15 GB free space; now it has 6 empty directories - still with names - and one empty file. Wow, 16 terabytes??
Code: Select all
# df -h /mnt/sdb1/
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 16T 16T 0 100% /mnt/sdb1
One more try. That must be a new world record!
Code: Select all
# df /mnt/sdb1/
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 17179387340 17179387340 0 100% /mnt/sdb1
Code: Select all
# ls -Ahov /dev/sdb1
brw-rw---- 1 root 8, 17 2017-07-31 02:28 /dev/sdb1
I changed the names of the directories and files just for the forum; they are all empty.
Code: Select all
# ls -Ahov /mnt/sdb1
total 40K
drwxr-xr-x 3 root 4.0K 2017-04-27 22:55 dirname
drwxr-xr-x 17 root 4.0K 2016-05-04 18:39 dirname
drwx------ 2 root 16K 2015-05-15 05:34 lost+found
drwxr-xr-x 21 root 4.0K 2017-07-13 06:51 dirname
drwxr-xr-x 3 root 4.0K 2017-05-29 13:56 dirname
-rw-r--r-- 1 root 3.3K 2015-05-17 06:08 filename.txt
drwxr-xr-x 10 root 4.0K 2016-08-10 03:23 dirname
What the hell happened?
What should I do now? Can I run fsck to find some lost files? Or umount all?
tallboy
Attachments: 16gig-flash.jpg
tallboy wrote:
Uh, uh! I must have made a mistake, I need some help here!
The last test, from the post above:
Code: Select all
# dd if=custom-puppy.iso of=/dev/sdb1
470984+0 records in
470984+0 records out
241143808 bytes (241 MB) copied, 87.0704 s, 2.8 MB/s
That will have destroyed the contents of partition sdb1. You will have to reformat it with ext4 or whatever.
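What happened here can be reproduced harmlessly with ordinary files. The toy sketch below is my own illustration, with no devices involved: one temp file plays the role of /dev/sdb1 and another plays the ISO.

```shell
# Toy demonstration: dd-ing an image onto a "partition" overwrites it from
# byte 0, clobbering whatever filesystem metadata lived there -- which is
# what of=/dev/sdb1 did to the real stick.
part=$(mktemp)                        # stand-in for /dev/sdb1
printf 'precious filesystem data' > "$part"
image=$(mktemp)                       # stand-in for the ISO
head -c 32 /dev/zero > "$image"
dd if="$image" of="$part" conv=notrunc 2>/dev/null   # notrunc mimics a device
tr -d '\0' < "$part" | wc -c          # prints 0: none of the old data is left
rm -f "$part" "$image"
```

The real fix is as Barry says: unmount and recreate the filesystem on the partition (e.g. with mkfs.ext4), accepting that the old contents are gone.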
Ha, ha, it's not wiped clean!
But the directory names just hung there in the previous terminal window; they are gone now.
I unmounted all partitions and remounted, and I now have 230 MB of Puppy on my SanDisk!
But when I hover the mouse over the flash icon, it says filesystem iso9660, size 29.1 GB!
When I ask with df:
Code: Select all
# df -h /mnt/sdb1/
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 230M 230M 0 100% /mnt/sdb1
Darn, I lost almost 16 TB!
tallboy
Haha, I guess you are right about that, rcrsn51. I have done some incredibly stupid things with Linux over the years, and almost all of them could have been avoided if I had just read the manual first!
I'm running off a live CD, so I guess I'll restart the Puppy, to be on the safe side. I'll wipe the stick clean now, but I think it's better to format it from a fresh Puppy! It's one of the advantages of a live Puppy, so you see I have learnt something from all my mistakes!
tallboy
BarryK wrote:
You will at least need to add "conv=fdatasync", or if using busybox dd that doesn't work, have to use "conv=fsync".
Without it, dd will finish before completely flushing data to the drive, and that could be a lot of data.
Also recommend that you specify a large block size, as the flash technology prefers it, say "bs=1M"
And while you are at it, append "oflag=direct", which, I am not sure, but think that it may bypass the flash drive cache.
I don't know where you get all that from; if it is important or mandatory for the testing, I will look at the dd manual. But one thing is sure: your method is not reliable, as it depends on the filesystem.
tallboy wrote:
Haha, I guess you are right about that, rcrsn51. I have done some incredibly stupid things with Linux over the years, and almost all could have been avoided if I had just read the manual first!
I'm running off a live CD, so I guess I'll restart the Puppy, to be on the safe side. I'll wipe the stick clean now, but I think it's better to format it from a fresh Puppy!
It's one of the advantages of a live Puppy, so you see I have learnt something from all my mistakes!
tallboy
You didn't read the command correctly: it is sdb, not sdb1. I have updated my message with this:
NOTE: DON'T USE dd ON A USB STICK WITH IMPORTANT DATA!
ALWAYS BACK UP YOUR DATA!
BarryK wrote:
You will at least need to add "conv=fdatasync", or if using busybox dd that doesn't work, have to use "conv=fsync".
Without it, dd will finish before completely flushing data to the drive, and that could be a lot of data.
Also recommend that you specify a large block size, as the flash technology prefers it, say "bs=1M"
And while you are at it, append "oflag=direct", which, I am not sure, but think that it may bypass the flash drive cache.
OK, I have read the dd manual and some other pages:
- 'fdatasync': "Synchronize output data just before finishing. This forces a physical write of output data."
---> it is the same as running "sync" after the dd command.
- 'bs=bytes': "Set both input and output block sizes to bytes."
'count=n': "Copy n 'ibs'-byte blocks from the input file."
---> These are mostly used for writing a fixed amount of test data, as in bs=1M count=1024, which writes 1024 blocks of 1 MiB (a 1 GiB file). You don't need them if you write one big ISO file.
- 'oflag=direct': "Use direct I/O for data, avoiding the buffer cache. Note that the kernel may impose restrictions on read or write buffer sizes. For example, with an ext4 destination file system and a Linux-based kernel, using 'oflag=direct' will cause writes to fail with EINVAL if the output buffer size is not a multiple of 512."
---> So it is not recommended or needed.
A plain and simple "dd if=puppy.iso of=/dev/sdb ; sync" is enough.
I noticed that many people prefer to test the write speed like this, on a mounted filesystem, so it depends on ext4, ext2, fat etc.:
Code: Select all
# dd if=/dev/zero of=/mnt/sdb1/testfile bs=1M count=1024
# sync
# rm /mnt/sdb1/testfile
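The three commands above can be wrapped into one small helper. Here is a sketch of that idea; write_test is a name of my own, and the mount-point argument would be wherever the stick is mounted (e.g. /mnt/sdb1).

```shell
# Sketch: one-shot write-speed test against a mounted filesystem.
# Writes a 64 MiB file with a forced flush, prints dd's summary line,
# then removes the test file again.
write_test() {
    target="$1/testfile"
    dd if=/dev/zero of="$target" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
    rm -f "$target"
}
write_test /tmp    # substitute the flash drive's mount point, e.g. /mnt/sdb1
```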
linuxcbon wrote:
A plain and simple "dd if=puppy.iso of=/dev/sdb ; sync" is enough.
NO, it is not; dd will complete before flushing all data to the drive.
Also, writing to the filesystem on the drive is precisely what I want to test, not raw writing to the drive.
scsijon,
Yes, "oflag=direct" does make a difference. I tested on one flash stick and got a slower transfer rate with it, so it did appear to be doing what the docs state: bypassing the cache in the drive. I did that test before my original post to this thread and blog.
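Barry's before-and-after comparison can be approximated on an ordinary file. A rough sketch of my own follows, using a scratch file in the current directory; note that O_DIRECT needs filesystem support, so on e.g. tmpfs the second command will print an error instead of a rate.

```shell
# Rough sketch: the same write done buffered-plus-flush and with direct I/O.
# bs=1M keeps the transfers block-aligned, which O_DIRECT requires.
f=./dd_direct_test.$$
dd if=/dev/zero of="$f" bs=1M count=16 conv=fdatasync 2>&1 | tail -n 1
dd if=/dev/zero of="$f" bs=1M count=16 oflag=direct 2>&1 | tail -n 1
rm -f "$f"
```

On a real stick, a slower rate for the second line is consistent with the drive's cache being bypassed, as Barry observed.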
BarryK wrote:
NO, it is not, dd will complete before flushing all data to the drive.
That's what I said; you didn't get it. dd always completes before really writing; you cannot change that, except with "oflag=direct". Now it depends on what you want to test and which flags you prefer; they will give different results.
BarryK wrote:
Also, writing to the filesystem on the drive is precisely what I want to test. Not raw writing to the drive.
OK, then you didn't specify which file systems to test on, and that will also give different results.
I have reconsidered; I need to give an explanation of why I backed out of this thread.
I was getting upset with the posts from linuxcbon. Then, when tallboy accidentally erased his flash drive using an example from linuxcbon, I had had enough.
The 'dd' utility is very powerful, and if you are not very careful, a drive or partition can get destroyed.
In my examples, dd has the parameter "of=/mnt/sd*", for example "dd of=/mnt/sdc2", which is relatively safe as it is writing to a mount point. And if for some reason the partition is not mounted there, it is still a fairly OK situation.
If you have "of=/dev/sd*" then you have to be very, very careful and understand exactly what you are doing.
The other thing that annoyed me about the posts from linuxcbon is that he was using dd without all the extra parameters, for example "bs=1M count=1024 conv=fdatasync oflag=direct". These are the result of considerable thought and prior knowledge. Setting the block size to 1M (or higher), for example, is recommended for efficient writing to flash.
A little note about linuxcbon himself. He has annoyed a lot of people over the years with what many consider to be nit-picking and trivial criticisms. However, I have found his feedback to be very helpful; sometimes he unearths things that have been overlooked.
So, I value his input, and have sent a PM to him explaining that.
Sandisk USB
Hi
I thought, since I have just got a couple of new USB sticks, I would join the discussion and tell you about mine. I bought all they had in the store, since they are SanDisk Ultra Flair USB flash drives, 64 GB, which I bought for £5.45 each. The packet says speeds up to 150 MB/s, and that will be a read speed under optimum conditions.
I used my Acer C720 Chromebook, which has Slacko 6.3.2 loaded on it; the machine has a USB 3 port and a USB 2 port. I did the test with the original vfat file system on the sticks. Results are:
USB 3 Port
Code: Select all
dd if=/dev/zero of=/mnt/sdb1/dummyfile bs=1M count=1024 conv=fdatasync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 19.8114 s, 54.2 MB/s
USB 2 Port
Code: Select all
dd if=/dev/zero of=/mnt/sdb2/dummyfile bs=1M count=1024 conv=fdatasync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 38.3976 s, 28.0 MB/s
Code: Select all
dd if=/dev/zero of=/mnt/sdb1/dummyfile bs=1M count=1024 conv=fdatasync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 37.385 s, 28.7 MB/s
It shows that even on a USB 2 port, the USB 3 sticks perform well.