I myself have a soft spot for StreamWriter and StreamReader, mainly because they let me write large amounts of data to a file very quickly and then read it back just as fast, with all the benefits of a stream.
That is, if I'm storing something in my own format; if you're storing settings or some general stuff, XML would probably be a lot better. XML makes it all cleaner and more portable.
So in general, I prefer to store settings in XML files, and the masses of data my application uses in ordinary text files with whatever formatting/structure suits that particular data. It's also the easiest and quickest to implement (reading a file into memory takes what, one or two lines of code... and I'm lazy, so that's good for me).
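For the settings side, .NET's XmlSerializer does most of the work for you. A minimal sketch, where the Settings class, its fields, and the file name are all just made up for the example:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Hypothetical settings class -- swap in whatever your app actually needs.
public class Settings
{
    public string Language = "en";
    public int WindowWidth = 800;
}

public class SettingsDemo
{
    public static void Main()
    {
        XmlSerializer serializer = new XmlSerializer(typeof(Settings));

        // Save the settings to an XML file.
        using (StreamWriter writer = new StreamWriter("settings.xml"))
            serializer.Serialize(writer, new Settings { Language = "sv", WindowWidth = 1024 });

        // Load them back.
        Settings loaded;
        using (StreamReader reader = new StreamReader("settings.xml"))
            loaded = (Settings)serializer.Deserialize(reader);

        Console.WriteLine(loaded.Language + " " + loaded.WindowWidth);
    }
}
```

The nice part is that adding a new setting is just adding a public field; the XML stays readable and portable like I said above.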
_____
Well, I guess it all depends on what exactly you're going to do, I mean what kind of environment your code will be living in. Is it a real stand-alone application or some web thing?
I only use .NET (C#) for applications and not the web (I prefer PHP for that).
If you're doing an application that will be running for a while, say through multiple modifications of the data, the smartest and most efficient way is to load the data at startup, put it somewhere in memory (some class designed to hold each person or whatever, stored in an ArrayList/Hashtable), and then do your work against the data in that ArrayList/Hashtable. And _then_, when you want to save or close the app, loop over all the data and write it out to the/a file.
The more data, the slower saving/loading will become (pretty logical). I guess it might not be an issue if you won't have much data, but who knows how much you'll be storing in a few months.
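A rough sketch of that load/modify/save pattern, using a Hashtable like I mentioned. The "name=race" line format, the method names, and the file name are assumptions I made up for the example:

```csharp
using System;
using System.Collections;
using System.IO;

public class PersonStore
{
    // Name -> race, kept in memory while the app runs.
    private Hashtable people = new Hashtable();

    // Load everything once at startup: one "name=race" pair per line.
    public void Load(string path)
    {
        if (!File.Exists(path)) return;
        using (StreamReader reader = new StreamReader(path))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                string[] parts = line.Split('=');
                if (parts.Length == 2)
                    people[parts[0]] = parts[1];
            }
        }
    }

    // All edits happen against the in-memory table, never the file...
    public void SetRace(string name, string race) { people[name] = race; }
    public string GetRace(string name) { return (string)people[name]; }

    // ...and everything is written back in one go when the app closes.
    public void Save(string path)
    {
        using (StreamWriter writer = new StreamWriter(path))
        {
            foreach (DictionaryEntry entry in people)
                writer.WriteLine(entry.Key + "=" + entry.Value);
        }
    }
}
```

Between Load() and Save() the disk is never touched, which is exactly why this is fast while the app is running.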
______________
If you absolutely want to operate directly on the data in a file for whatever reason (the only one I can think of is a very short-lived app, such as some web thing, where the performance hit of loading and writing everything might actually be noticeable), then I guess your most efficient course is to store the data as usual in some text file. But then we must know where in the file the data is, to be able to jump to it and write there directly (without loading everything into memory we can't know). So you'd have to keep some sort of index (possibly in another file) that states, for example, at what line number each piece of data begins..
____data.txt
Worf
Klingon
ET
Alien
Alfred
Human
____data_index.txt
Worf:1
ET:4
Alfred:7
___________________________
So if we read the index file and parse it (Split() comes to mind), and we want to change the race of ET, we know that his definition begins on line 4, and that each person definition is a name followed by the race on the next row. We know this since we are the ones who saved the file in the first place and decided what order things come in. So we can jump directly to the 5th line and replace it with something else. (One catch: you can only overwrite a line in place if the new text isn't longer than the old one, so fixed-width lines make this a lot easier.)
Of course, keeping an index like this comes with a performance penalty of its own, since we must read the index before we can do anything. So you'll only benefit performance-wise if the main data file contains such large amounts of data that reading all of it takes longer than reading the index file, because the index file will obviously also take a while to load if there are a few thousand "person definitions".
In this example, with only name and race, the gain is almost nothing since there's so little data. But if you store a lot of data per "cluster" or "group" of data (name & race being one group), then you will gain by keeping that extra index and being able to jump directly somewhere and write.
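Here's a rough sketch of that jump-and-overwrite idea. One assumption I'm adding on top of the example above: the data file uses fixed-width lines, since you can only seek straight to line N when every line starts at a predictable byte offset. The class name and the 16-character width are made up for the example:

```csharp
using System;
using System.IO;
using System.Text;

public class IndexedFile
{
    // Fixed record width (including the newline) -- an assumption for this sketch.
    const int LineWidth = 16;

    // Write each line padded to the fixed width so line N starts at byte (N-1)*LineWidth.
    public static void WriteLines(string path, string[] lines)
    {
        using (StreamWriter writer = new StreamWriter(path))
            foreach (string line in lines)
                writer.Write(line.PadRight(LineWidth - 1) + "\n");
    }

    // Look up a name's line number in the index file ("Name:LineNumber" per line).
    public static int FindLine(string indexPath, string name)
    {
        foreach (string entry in File.ReadAllLines(indexPath))
        {
            string[] parts = entry.Split(':');
            if (parts[0] == name) return int.Parse(parts[1]);
        }
        return -1;
    }

    // Overwrite one line in place, without reading the rest of the data file.
    public static void OverwriteLine(string path, int lineNumber, string newText)
    {
        using (FileStream stream = new FileStream(path, FileMode.Open, FileAccess.Write))
        {
            stream.Seek((long)(lineNumber - 1) * LineWidth, SeekOrigin.Begin);
            byte[] bytes = Encoding.ASCII.GetBytes(newText.PadRight(LineWidth - 1) + "\n");
            stream.Write(bytes, 0, bytes.Length);
        }
    }
}
```

So for ET: FindLine("data_index.txt", "ET") gives 4, the race is on the next row, so OverwriteLine("data.txt", 5, "Vulcan") changes it without touching Worf or Alfred at all.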
_______________
For stand-alone applications I would greatly prefer just reading it all at startup. Easier, and not as much code. I'm lazy.
But if you're doing web stuff and you want that extra speed/performance, the index route is probably better.
Now I must go eat something, I've been coding all day. I hope I've been of help.