Bunch of cf cards in raid 0????

For stuff that really doesn't have ANYTHING to do with Puppy
Runemaster
Posts: 180
Joined: Sat 05 Aug 2006, 04:41
Location: Albany, GA U.S.

Bunch of cf cards in raid 0????

#1 Post by Runemaster »

OK, my plan is to get four 8 GB or 16 GB CF cards (they need to be at least 233x or faster) plus four CF-to-SATA adapters, and run them in software RAID 0. I can't afford a true RAID card with its own dedicated processor and cache, and there would be no point in buying a standard SATA controller when one is already built into the motherboard.
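For the software side, here's a minimal sketch of what the Linux setup could look like with mdadm. The device names and mount point are placeholders, not what these particular CF-to-SATA adapters will actually enumerate as:

```shell
# Hypothetical mdadm RAID 0 setup; /dev/sda-/dev/sdd are placeholders,
# so check dmesg or fdisk -l for the real device names. All of this
# needs root, and RAID 0 has no redundancy: one dead card loses the
# whole array.
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd

mkfs.ext3 /dev/md0            # put a filesystem on the stripe set
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid      # mount it
cat /proc/mdstat              # check array status
```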

Questions, suggestions, comments, or concerns?

I would like only constructive criticism, please; none of that ranting and raving about how I'm doing it all wrong. Thank you!
Adventurer: I seek knowledge and strength.

Seer: Knowledge comes from experience... Strength comes from battleaxes.

Crash
Posts: 453
Joined: Fri 09 Dec 2005, 06:34
Location: Melbourne, FL

#2 Post by Crash »

I would be concerned about the write endurance, since I assume you would write to it once in a while. Otherwise, it looks like a good experiment.

It depends on what you want to do with it. There is a product that has been out for a couple of years that makes any kind of RAID look sick in terms of performance:

http://www.tomshardware.com/2005/09/07/can_gigabyte/

It is a solid-state disk that uses DDR RAM chips and connects to a SATA port, battery backed up. The benchmarks are awesome.

The first experience that I had with a RAM drive was a while back. It was a 256KB DRAM board on an S100 bus with a 2.5 MHz Z80. It was one of the fastest word processors that I have used to this day. Not WYSIWYG, but practically instantaneous.

Runemaster
Posts: 180
Joined: Sat 05 Aug 2006, 04:41
Location: Albany, GA U.S.

#3 Post by Runemaster »

The cards I'm looking at have a 1-million-hour MTBF and wear leveling. Everyone's so worried about flash chips burning out, but how many times have you actually heard of a flash chip dying because it reached the end of its write life? I know people are concerned because the cards hold personal and valuable data, but frankly I'm not worried about these things burning out.

I've seen those things before, and yes, you're right, the performance is through the roof, but the cost per gigabyte is unaffordable.

Crash
Posts: 453
Joined: Fri 09 Dec 2005, 06:34
Location: Melbourne, FL

#4 Post by Crash »

You're right, I've never seen a flash device actually die, so it's probably pretty safe. The Gigabyte product is kind of expensive, like $100, but it uses what is now becoming obsolete and cheap RAM. Eventually we will be garbage picking the DDR RAM for free out of old dead computers. Personally, I'm happy with the performance of my $10 USB thumb drive too.

Anyway, I don't see any reason in principle why the CF RAID won't work. I use a CF card with an IDE adapter and it works just fine.

nipper
Posts: 150
Joined: Sat 22 Mar 2008, 16:08

Re: Bunch of cf cards in raid 0????

#5 Post by nipper »

Runemaster wrote: OK, my plan is to get four 8 GB or 16 GB CF cards (they need to be at least 233x or faster) plus four CF-to-SATA adapters, and run them in software RAID 0. I can't afford a true RAID card with its own dedicated processor and cache, and there would be no point in buying a standard SATA controller when one is already built into the motherboard.

Questions, suggestions, comments, or concerns?
Other than what has already been mentioned, it isn't easy to make suggestions or express concerns when you don't state your intended goal. If you just want to try it, that's one thing. If you're trying to achieve a speed increase, I'm not certain that striped flash memory will be any faster than the CF cards without the RAID (and the RAID adds some overhead). Striped disks are faster because individual hard drives read and write much more slowly than the CPU can move data, so moving on to the next drive in line is faster than waiting for the current drive to be ready for the next operation. Flash memory, however, is ready to read and write again sooner than a hard drive, so there may not be any actual performance increase with RAID in this configuration. It should be quieter, though, and maybe even use less power.
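One way to settle that question empirically, once an array exists, is a crude dd sequential-write comparison: run the same command against a directory on a single card and again against the striped array, and compare the rates dd reports. The 16 MB size here is just an example; bigger is better for defeating caches:

```shell
# Write 16 MB and let dd report the transfer rate; conv=fsync forces
# the data to the device before dd exits, so the number isn't just
# page-cache speed. Run this in a directory on the device under test.
dd if=/dev/zero of=testfile bs=1M count=16 conv=fsync
rm -f testfile
```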

I don't know about SATA adapters, but there are CF-to-IDE adapters that can take two CF cards (similar to two HDDs on one cable); maybe you can find SATA ones too from your supplier.

I think the general consensus is that less-expensive hardware RAID isn't better than software RAID anyway. And hardware RAID can cause problems if the RAID card itself fails and you can't find another of the same model (this may no longer be true, but I have seen reports that a different RAID card didn't see the drives after a failure and replacement).

This could turn out to be an expensive experiment if you use those fast, big CF cards. I've only seen the 16 GB cards at around $200+ USD.

In any case, it's an interesting idea, I will be watching with interest for your success and report. Good luck.

Runemaster
Posts: 180
Joined: Sat 05 Aug 2006, 04:41
Location: Albany, GA U.S.

#6 Post by Runemaster »

Thanks. I know the CF cards have the latency issue down pat, but in terms of read/write speed they are much slower than a traditional HDD. A mechanical HDD can transfer from buffer to disk at about 120 MB/s, while CF maxes out around 45 MB/s, so I'm hoping that putting four CF cards in a RAID 0 array will make up the difference. The whole point is to get read/write speeds close to those of a mechanical HDD, but with latency around 0.1 ms, and to come out ahead money-wise versus buying an SSD.
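As a back-of-envelope check on that reasoning (assuming the quoted ~45 MB/s per card and perfect scaling, which adapter and software-RAID overhead will eat into):

```shell
# Best-case RAID 0 aggregate: number of cards times per-card speed.
# Real-world throughput will be lower; this is only a ceiling.
cards=4
per_card=45   # MB/s, the quoted CF maximum
echo "$((cards * per_card)) MB/s theoretical ceiling"
```

That 180 MB/s ceiling comfortably beats the ~120 MB/s mechanical-HDD figure, which is why the experiment is worth trying even if striping only scales at, say, 70% efficiency.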

Aitch
Posts: 6518
Joined: Wed 04 Apr 2007, 15:57
Location: Chatham, Kent, UK

#7 Post by Aitch »

Hi Runemaster,

I've been looking into this for some time, but I have not yet hit my speed requirements with CF cards, even with 'high-speed' cards. I am just about to try 8 GB CF microdrives, which are cheaper and, as far as I can see, quicker.

I remember seeing an article a while back (I can't locate it now) about someone using a USB hub with four CF cards in RAID as a speedy swap drive, but I think latency was the problem there, too. I don't think burn-out is an issue any more, as the new generation of CF cards is optimized for random writes.

I don't have SATA, so I'll be interested in your results, as I was contemplating getting a server RAID card off eBay, if I can find a 32-bit one for the right price.

good luck

Aitch

terryaaa
Posts: 2
Joined: Wed 16 Apr 2008, 14:00

Solid State

#8 Post by terryaaa »

Runemaster, your idea, I predict, is on the minds of thousands, as posts on this subject are beginning to appear. After about a month of research, I have established that this is complicated, but I am seeing light at the end of the tunnel. I wish there were a more formal venue to tackle this. I will try to tell you what I know so far. Maybe we can attract more intelligent life forms (than me) to your post.

The first thing you need to know is that Linux software RAID is impressive stuff. It is said to outperform most of the big-bucks hardware cards out there in PC applications, and you can throw just about any mixed bag of storage devices at it; it doesn't discriminate. You need to read Simon Kaczor's article at Linux.com: http://www.linux.com/feature/124256

Here you will be shocked to find out that for reads (which are the prime objective for fast booting and program loading), RAID 1 outperforms RAID 0 in most cases! This is because of Linux RAID's ability to multi-thread and pull data from both RAID 1 mirrors at the same time.
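The RAID 1 behaviour described above can be sketched the same way with mdadm. Device names are placeholders, and I haven't benchmarked this arrangement myself:

```shell
# Hypothetical two-way mirror; under Linux md, concurrent reads can be
# served from either device, which is where the read speedup comes from.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
cat /proc/mdstat   # watch the initial mirror sync progress
```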
I think we all agree that a small solid-state RAID configuration, 8 GB or even 4 GB, would handle most of our programs for Puppy. The issue is speed.

First up, USB: in addition to the latency, which at times is worse than hard drives, the total USB bus maxes out at just over 32 MB/s. That's OK for older, slower memory, but not up to the task for the 200x-plus stuff (30 MB/s-plus CF cards) we are all now eyeing.

So what about all those IDE, SATA, and even extra card slots on our computers? Without getting into too much debate: those I/O ports are capable of 60 to 70 MB/s real-world throughput. The only rule I know of so far is one drive per IDE port; two drives on an IDE port (master/slave) is a no-no.

With CF-to-IDE and CF-to-SATA adapters available for $1 and $5, things are looking good. Then comes my latest hurdle: there is a throughput problem with these adapters. They say 'UDMA compatible', so you would think they meet UDMA-4 specs, which afford approximately 65 MB/s throughput, with 50 MB/s actually proven in testing that I've seen. Yet eBay's largest vendor of such cards told a prospective customer that 20 MB/s was the max, and these guys don't usually shoot themselves in the foot! I'm baffled, because CF is based on the ATA interface and in my mind is little more than a pass-through device. The new Addonics four-CF RAID card (Quad) maxes out at 40 MB/s, but then it is on a single PCI bus (not PCI-X).

To me these IDE and SATA ports are the way to go, but the adapters are killing the party. We need to find proper UDMA-4 or even UDMA-5 adapters; unless maybe the eBay vendor did shoot himself in the foot and someone out there has tested differently.

If we could resolve this issue, imagine this: four 2 GB CF cards on four I/O ports (IDE and/or SATA), with each CF partitioned into two or even four drives, thus running RAID 0+1 or RAID 10 on 8 or 16 drives! With a throughput of 50 MB/s per port, the speed should fall between 150 and 200 MB/s.
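Assuming the partitioning scheme above (four cards, each split in two; all names are hypothetical), the RAID 10 layout could be created in one mdadm command. Device order matters: with the default 'near' layout, adjacent devices in the list become mirror pairs, so the partitions are interleaved here to keep each mirror on two different physical cards:

```shell
# Hypothetical RAID 10 over eight partitions (two per CF card).
# Interleaving sda1/sdb1, sdc1/sdd1, ... keeps mirror pairs on
# different cards, so one dead card doesn't take out a whole mirror.
mdadm --create /dev/md0 --level=10 --raid-devices=8 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 \
      /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
```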
A final note: how do you suppose those $2000 SSD vendors are getting their 120 MB/s? I suspect that, beyond solving the I/O issue, Linux software RAID and some Linux partitioning hold all the answers.

Terry.

Aitch
Posts: 6518
Joined: Wed 04 Apr 2007, 15:57
Location: Chatham, Kent, UK

#9 Post by Aitch »

@terryaaa

For real speed [faster than SATA, in practice], try 4-5x 9 GB 15,000 rpm Ultra3 SCSI drives in RAID 5 [noisy, though!]

Other than that:-

http://linuxhelp.blogspot.com/2004/12/b ... mance.html

I don't know if hdparm is usable in Puppy, but it seems to me that 'performance tuning' Puppy Linux is actually everyone's dream, though not spelled out as such, since we all want to keep and improve its performance advantages over other OSes.
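hdparm itself is a small standalone binary, so if a given Puppy build lacks it, it should be easy to add. The basic benchmark usage (device name is a placeholder) is:

```shell
# -T: timed cached reads (measures memory/bus speed)
# -t: timed buffered device reads (measures actual disk throughput)
# Needs root; /dev/sda is a placeholder for the drive under test.
hdparm -Tt /dev/sda
```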

Anyone know if Bonnie++ will run in Puppy?

http://www.coker.com.au/bonnie++/
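If bonnie++ compiles (or a binary can be found) for Puppy, a minimal run might look like this; the directory, user, and size are just examples:

```shell
# Benchmark the filesystem containing /mnt/test, running as a
# non-root user (bonnie++ refuses to run as root without -u).
# -s is the test file size in MB; make it larger than RAM so the
# results aren't just measuring the page cache.
bonnie++ -d /mnt/test -u nobody -s 1024
```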

Perhaps some of the knowledgeable people out there could do some articles on "hardware vs process tuning in Puppy Linux".

http://www.linuxforums.org/desktop/linu ... uning.html

http://www.ss64.com/bash/ulimit.html

Next question:-
Dual/quad/more processor support with full multitasking? with/without buffering?

Ask BarryK?

Aitch :)
