I am not an Android expert, but I thought I might share a few experiences I have had with running things from RAM under Linux.
The first time I ever tried this, I was actually exporting a disk from Linux to use to boot a Windows XP machine. All I did was run a loop containing 'cat /dev/sdb1 &>/dev/null'. Quite simply, it continually read the HDD I was interested in, thereby keeping it in the page cache. The result was lightning-fast reads (coming straight from RAM) but normal-speed writes (as they still went directly to disk). I found this to be the safest, simplest option.
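A minimal sketch of that cache-warming loop (the device name is just an example; substitute the disk you actually care about):

```sh
#!/bin/sh
# Keep /dev/sdb1 resident in the Linux page cache by re-reading it forever.
# Reads after the first full pass are served from RAM, provided the device
# fits in free memory; writes still hit the disk as normal.
while true; do
    cat /dev/sdb1 > /dev/null 2>&1
    sleep 1   # brief pause so the loop doesn't spin if the read fails
done
```

Nothing about the system changes here, which is why it's the safest option: kill the loop and you're back to normal behaviour.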
The next method I have used is to modify the 'init' script, copying the required files to a tmpfs and running the system from them. It gives excellent performance (so long as you have enough spare RAM in the system for it), but it does have a major downside (or upside, depending on your usage): writes go only to RAM, so all changes are lost on a reboot/shutdown/power loss/crash. In my case this was for a system I was using as a set-top box, so it didn't matter to me (I had scripts I could run to manually sync files if I needed to save changes), but you would need to sync regularly and/or on shutdown, or find a way to hook into system events to know when to sync.
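The core of that init-script change looks something like this (paths and the tmpfs size are purely illustrative; the exact hook point depends on your init system):

```sh
#!/bin/sh
# Hypothetical init fragment: copy the root filesystem into a tmpfs
# and run the system from the copy. Everything in /mnt/ram lives in
# RAM only and vanishes on reboot/power loss.
mount -t tmpfs -o size=1G tmpfs /mnt/ram
cp -a /mnt/disk-root/. /mnt/ram/      # copy everything, preserving ownership/permissions
# In a real initramfs you would now switch into the RAM copy, e.g.:
# exec switch_root /mnt/ram /sbin/init
```

A matching sync script (an rsync from /mnt/ram back to the disk) is what I used to save changes manually when I wanted to keep them.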
The next way I tried was for a more mainstream system, so the above approach was not good enough. What I did was start with the method above, then use unionfs so changes were saved to a HDD partition. This worked quite well, but we are back to writes being the same speed they would otherwise be, and changed data must either be read from disk or merged back into the ramdisk.
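The union mount itself is a one-liner once both branches exist. This sketch uses the old unionfs kernel module's syntax; the branch paths and partition are examples (on a modern kernel, overlayfs does the same job in-tree):

```sh
#!/bin/sh
# Read-only RAM copy underneath, writable HDD branch on top:
# reads come from RAM, writes land on /dev/sda3 and survive reboots.
mount -t tmpfs tmpfs /mnt/ram           # RAM copy of the system (populated as above)
mount /dev/sda3 /mnt/changes            # HDD partition that captures all writes
mount -t unionfs -o dirs=/mnt/changes=rw:/mnt/ram=ro unionfs /mnt/union
# Rough overlayfs equivalent on current kernels:
# mount -t overlay overlay -o lowerdir=/mnt/ram,upperdir=/mnt/changes/upper,workdir=/mnt/changes/work /mnt/union
```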
Another way I tried, mainly to fit more into RAM than I had available, was a variant of what I believe is known as Casper. Basically, for the filesystem you wish to load into RAM, you create a squashfs image (a read-only compressed filesystem), load that into RAM, mount it, then union it with disk or RAM depending on whether you want to keep changes (basically as above, but using squashfs to reduce RAM use). The CPU cost of decompression is tiny (at least on a desktop; this was on a single-core Atom with 2GB RAM for a Mythbuntu box at the time), and is more than compensated for by the speed of access to RAM. This method also speeds up boot times, as it takes less time to load the smaller compressed image than it would the full one.
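In outline, the squashfs variant replaces the plain file copy with a compressed image (all paths here are illustrative):

```sh
#!/bin/sh
# Build the compressed image once, offline, from the filesystem you want in RAM:
mksquashfs /mnt/disk-root /boot/root.sqsh

# At boot: pull the (much smaller) image into a tmpfs and loop-mount it.
mount -t tmpfs tmpfs /mnt/ram
cp /boot/root.sqsh /mnt/ram/
mount -o loop -t squashfs /mnt/ram/root.sqsh /mnt/ro
# /mnt/ro is now a read-only view served (and decompressed) from RAM;
# union a writable branch (tmpfs or disk) over it, exactly as above.
```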
And here's the final method I've tried (god, I've spent far too much time on this; I didn't realise until I started writing it all up). Use the Linux software RAID driver: build a RAID1 (mirror) from the disk storage and a file on tmpfs, with the disk member set to write-mostly (and write-behind, if you can stomach data loss in a power cut or crash). It will start off syncing the disk to RAM, serving read "misses" from disk. Once the sync completes, all reads come from RAM, while writes go to both RAM and the disk.
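A rough mdadm incantation for this, assuming device names, sizes and the tmpfs path that are examples only (--write-behind needs a write-intent bitmap, hence --bitmap=internal):

```sh
#!/bin/sh
# RAID1 mirror between a real disk partition and a file in tmpfs.
mount -t tmpfs -o size=8G tmpfs /mnt/ram
truncate -s 7G /mnt/ram/mirror.img          # RAM-side member, sized to match the partition
RAMDEV=$(losetup -f --show /mnt/ram/mirror.img)
# --write-mostly marks the disk so reads prefer the RAM member;
# --write-behind lets writes to the write-mostly member lag (faster, riskier).
mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
      "$RAMDEV" --write-mostly --write-behind=256 /dev/sdb1
```

After each boot the array resyncs the disk into the fresh tmpfs file, so there's a warm-up period before you see the full benefit.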
I don't know how much of this can easily be used on a Droid device (I don't know what modules are available; it probably varies between devices), but in this case I would say the SquashFS and UnionFS approach is probably best (due to limited RAM), though it's something that would need experimenting with.
Wish you all well with this.