Windows 98 large file-count tests on large volume (500 gb hard drive)

Thread starter: 98 Guy
File copy test - Windows 98

-------------------------
Hardware Details:

Motherboard: ASRock Dual VSTA:
http://www.asrock.com/mb/overview.asp?Model=775Dual-vsta
CPU: Intel Celeron 3.46 ghz
Chipset: VIA PT880 Pro/Ultra Chipset
Driver download (VIA Hyperion Pro Driver Package):
http://www.viaarena.com/Driver/VIA_HyperionPro_V512A.zip
Onboard lan: Via Rhine II / Lan driver: fetnd5av.sys
http://www.viaarena.com/Driver/VT6107_VT8231_VT8233_VT8235_VT8237_VT8251v44FVIA.zip
Installed memory: 512 mb, DDR
USB 2.0 Root Hub (driver: usbhub20.sys)
VIA PCI to USB Enhanced Host Controller (driver: usbehci.sys)
http://www.viaarena.com/Driver/VIA_USB2_V270p1-L-M.zip

Hard drive: Western Digital WD5000KS (500 gb) SATA
Hard drive is controlled by on-board VIA VT8237A Raid controller
(viamraid.mpd, ios.vxd, viamvsd.vxd)
-------------------------

The Windows-98se CD was copied to its own directory on the hard drive,
and all cab files were unpacked into their own separate subdirectory.
In addition to the unpacked cabs, I copied all files in my win-98
system and system32 directories. So this sub-directory has 2000 files
(129 mb). The overall size of the win-CD directory is therefore 767
mb (5565 files, 366 folders).

I replicated that directory 541 times in a tree as follows:

c:\file test (root test directory)

\Super-1
\Super-2
\Super-3

In each of the above three directories, 10 subdirectories (0001
through 0010). In each of those 10 directories, 18 subdirectories
(000A through 000R). In each of those, a copy of the above-described
win98-CD source files.
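For what it's worth, the layout can be sketched in a few lines (a hypothetical reconstruction that only builds the empty directory skeleton; the real tree was populated by copying the 767 mb win-CD directory into each leaf):

```python
import os
import tempfile

def build_tree(root):
    """Recreate the test layout: 3 Super-x directories, each holding
    10 numbered subdirectories (0001-0010), each holding 18 lettered
    subdirectories (000A-000R)."""
    leaves = []
    for s in range(1, 4):                    # Super-1 .. Super-3
        for n in range(1, 11):               # 0001 .. 0010
            for c in "ABCDEFGHIJKLMNOPQR":   # 000A .. 000R
                leaf = os.path.join(root, "Super-%d" % s, "%04d" % n, "000" + c)
                os.makedirs(leaf)            # creates intermediate dirs as needed
                leaves.append(leaf)
    return leaves

leaves = build_tree(tempfile.mkdtemp())
print(len(leaves))   # 3 x 10 x 18 = 540 leaf directories
```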

The file-properties dialog box for c:\file test takes 10 to 15 minutes
to arrive at a final tally, which is:

Size: 405 gb (435,633,783 bytes), 441,899,741,184 bytes on disk
Contains: 3,010,665 Files, 199,119 Folders

Screen capture of file-properties dialog box:

http://www.fileden.com/files/2007/5/26/1113604/file-test.jpg

Based on the size (135 gb) and time-stamps of the Super-x directories,
I calculated that the file-copy rate was effectively 11.5 mb per
second (it took 3.5 hours to copy the contents of Super-1 to Super-2).

chkdsk c:

487,431,968 kilobytes total disk space
52,323,392 kilobytes free

4096 bytes in each allocation unit
121,857,992 total allocation units on disk
13,080,848 available allocation units on disk

I re-started the computer in DOS and ran DOS-scandisk. I left it
running and will check back in a few hours to see how far it has gotten.

Conclusion / Comments:

Well, basically, I almost filled a 500 gb hard drive with a replicated
set of files that range in size from a few bytes to a few mb in size.
A grand total of over 3 million files spread across almost 200,000
directories. Windows was functional during and after this file-copy
process, and the system continues to boot and function normally.

If anyone out there is not satisfied that my test methodology was
sufficient to correctly test win-98 for a file-count limitation or a
directory-size limitation that may arise given the large hard drives
available today, please speak up and describe an alternate test
method.

As a comment, I don't believe that creating a set of zero-byte files
would test windows-98 with the same level of "stress" as the test I
describe here.
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)


"98 Guy" <98@Guy.com> wrote in message news:46A536AD.5236DD95@Guy.com...
> File copy test - Windows 98
>
> -------------------------
> Hardware Details:


8<--------------------------------------------
>


> Size: 405 gb (435,633,783 bytes) 441,899,741,184
> Contains: 3,010,665 Files, 199,119 Folder
>
> Screen capture of file-properties dialog box:
>
> http://www.fileden.com/files/2007/5/26/1113604/file-test.jpg
>
> Based on the size (135 gb) and time-stamps of the Super-x directories,
> I calculated that the file-copy rate was effectively 11.5 mb per
> second (it took 3.5 hours to copy the contents of Super-1 to Super-2).
>
> chkdsk c:
>
> 487,431,968 kilobytes total disk space
> 52,323,392 kilobytes free
>
> 4096 bytes in each allocation unit
> 121,857,992 total allocation units on disk
> 13,080,848 available allocation units on disk
>


Something here does not make sense to me.
Here is a clip from your post in response to one of mine a few weeks ago.

...............................
For volumes larger than 64 gb, the cluster size remains at 32kb, but
the cluster count is allowed to exceed 2 million. Microsoft has
stated that the max cluster count is 4.177 million, which would result
in a volume size of 137 gb given a cluster size of 32 kb, which is
another way of arriving at the technical limitation of ESDI_506.PDR.
...............................
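As a quick sanity check, the clipped figures are self-consistent:

```python
clusters = 4_177_000          # Microsoft's stated maximum cluster count
cluster_size = 32 * 1024      # 32 kb per cluster
volume_gb = clusters * cluster_size / 10**9
print(round(volume_gb))       # 137 -- the familiar "137 gb" limit
```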

Are you using some third party vfat driver?
Or some other formatting program?

8<----------------------

> If anyone out there is not satisfied that my test methodology was not
> sufficient to correctly test win-98 for a file-count limitation or a
> directory-size limitation that may arise given current modern large
> hard drives available today, please speak up and describe an alternate
> test method.


I don't think there is a specific directory size (number of entries)
limitation, except in the root directory.
The directory entries would not know the total file size within the
directory - that must be computed.

I think the methodology is reasonable, but I have two concerns.
First, we know that windoze places files somewhat arbitrarily on the hard
drive, although fat32 is more 'front to back' than ntfs.
I would like to see a scandisk map (possibly using norton defrag, not
practical using scandisk) to show that the 'back' of the hard drive is
empty, and some proof that scandisk can read and write those sectors.
A reboot in between would be a good idea, since windows caches a bunch of
file data as it creates files and folders, which it may not be able to find
after a reboot.

As I recall, the problem is not in creating the files, it is in using them.
Second, I would like to see some files written to the back of the hard
drive and successfully read, updated and re-read.

It would be interesting to see the comparison of file size actually written
to file space taken up.

>
> As a comment, I don't believe that creating a set of zero-byte files
> will necessarily accomplish or test windows-98 with the same level of
> "stress" as the test I describe here.


I agree with that.

Some of this is of academic interest only, as win98 and fat32 have so many
other limitations.

For the record, I had to switch to xp because
a) a few programs I need are not available in win98, and
b) I have video files which exceed the 2 gig/4 gig fat32 limit.

I still run 98 on some of the machines here, as there are some programs
which will not run properly in xp.

Stuart
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)

Stuart Miller wrote:

> Something here does not make sense to me. Here is a clip
> from your post in response to one of mine a few weeks ago.
>
> ..............................
> For volumes larger than 64 gb, the cluster size remains at 32kb,
> but the cluster count is allowed to exceed 2 million. (...)
> ..............................
>
> Are you using some third party vfat driver?
> Or some other formatting program?


The drive in question was formatted with Western Digital "Data
Lifeguard Tools" version 11.2 for DOS:

http://support.wdc.com/download/downloadxml.asp#53
http://websupport.wdc.com/rd.asp?p=...://support.wdc.com/download/dlg/DLG_V11_2.zip

It creates a bootable floppy with drive-formatting software (I believe
it's some version of OnTrack's Disk Manager software). It allows for
the quick partitioning and formatting of WD drives. For FAT-32, it
allows the user to choose the cluster size, from 512 bytes up to 32
kb.

> I don't think there is a specific directory size (number of
> entries) limitation, except in the root directory.


Actually, I think that FAT and FAT-16 had a limit of something like
512 entries in the root directory (I remember some win-95 systems that
didn't work properly when the number of files in the root directory
reached 512). I believe I read somewhere that FAT-32 does not
allocate a fixed size for the directory hence there is no practical
limitation as to how many entries the directory (root or otherwise)
can contain.

> I think the methodology is reasonable, but I have two concerns.
> First, we know that windoze places files somewhat arbitrarily
> on the hard drive, although fat32 is more 'front to back' then
> ntfs.
> I would like to see a scandisk map (possibly using norton
> defrag, not practical using scandisk) to show that the 'back'
> of the hard drive is empty, and some proof that scandisk
> can read and write those sectors.


I'm not sure I understand what you're trying to determine.

Since I've blown past the 137 gb barrier by filling a 500 gb drive
with 400 gb of material, does it matter *how* the drive is filled
(either physically or logically) ?

What is the significance of the "back" end of the hard drive, and
whether it is used or empty?

We know that I started with 121 million clusters, and I've used
slightly over 100 million of them in this test. The back-end is
pretty small at this point.

> As I recall, the problem is not in creating the files,
> it is in using them Second, I would like to see some
> files written to the back of the hard drive and
> successfully read, updated and re-read.


Since I have 540 replicated sets of files, would a series of random
file-comparisons made on those sets suffice to show that win-98 is
able to retrieve the files and perform a byte-level comparison on
them? Would such a test demonstrate the integrity of the file system
as well as win-98's ability to work with it?
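A minimal sketch of such a spot-check (the function name, paths and sample size are hypothetical; `filecmp.cmp` with `shallow=False` does a byte-level comparison):

```python
import filecmp
import os
import random

def spot_check(set_a, set_b, sample=100):
    """Byte-compare a random sample of files between two replicated sets.
    Returns the relative paths of any files that differ."""
    rel_paths = []
    for dirpath, _dirs, names in os.walk(set_a):
        for name in names:
            full = os.path.join(dirpath, name)
            rel_paths.append(os.path.relpath(full, set_a))
    chosen = random.sample(rel_paths, min(sample, len(rel_paths)))
    return [p for p in chosen
            if not filecmp.cmp(os.path.join(set_a, p),
                               os.path.join(set_b, p), shallow=False)]
```

Run it against, say, two of the replicated leaf directories; an empty list means every sampled file read back identically from both sets.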
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)


"98 Guy" <98@Guy.com> wrote in message news:46A679DD.1B849E34@Guy.com...
> Stuart Miller wrote:
>
>> Something here does not make sense to me. Here is a clip
>> from your post in response to one of mine a few weeks ago.
>>
>> ..............................
>> For volumes larger than 64 gb, the cluster size remains at 32kb,
>> but the cluster count is allowed to exceed 2 million. (...)
>> ..............................
>>
>> Are you using some third party vfat driver?
>> Or some other formatting program?

>
> The drive in question was formatted with Western Digital "Data
> Lifeguard Tools" version 11.2 for DOS:
>
> http://support.wdc.com/download/downloadxml.asp#53
> http://websupport.wdc.com/rd.asp?p=...://support.wdc.com/download/dlg/DLG_V11_2.zip
>
> It creates a bootable floppy with drive-formatting software (I believe
> it's some version of OnTrack's Disk Manager software). It allows for
> the quick partioning and formatting of WD drives. For FAT-32, it
> allows the user to choose the cluster size, from 512 bytes up to 32
> kb.
>

Thank you for that info. You are bypassing some of the windows disk
management routines, so it would be natural to expect better results and
fewer limitations. (even when ms-dos first came out, there were file systems
in use which were far superior to fat-16, but I'll skip the rant about such
things)
We did this a number of times over the years, with various bios and ms-dos
limitations.
I don't remember the specific limits, but I recall 1 gig hard drives were a
problem in dos.

Question - what does this do with the 2gig/4gig file size limit?
I use both numbers, because fat-32 can not create a file bigger than 4 gigs,
but it can not copy files between 2 and 4 gigs.


>> I don't think there is a specific directory size (number of
>> entries) limitation, except in the root directory.

>
> Actually, I think that FAT and FAT-16 had a limit of something like
> 512 entries in the root directory (I remember some win-95 systems that
> didn't work properly when the number of files in the root directory
> reached 512).


This is a ms-dos restriction, and applies to all fat-12 and fat-16 systems.
ms-dos (which is win 95 & 98) would not create any more entries after a
specific number.

>I believe I read somewhere that FAT-32 does not
> allocate a fixed size for the directory hence there is no practical
> limitation as to how many entries the directory (root or otherwise)
> can contain.
>


>> I think the methodology is reasonable, but I have two concerns.
>> First, we know that windoze places files somewhat arbitrarily
>> on the hard drive, although fat32 is more 'front to back' then
>> ntfs.
>> I would like to see a scandisk map (possibly using norton
>> defrag, not practical using scandisk) to show that the 'back'
>> of the hard drive is empty, and some proof that scandisk
>> can read and write those sectors.

>
> I'm not sure I understand what you're trying to determine.


I recall some problem with partitions above a certain size, where windows
would create the files in the 'back' of the partition (after a certain byte
count), but then be unable to read them, or be unable to defrag them. This
was related to bios settings and windows limits, but I think you may have
bypassed that problem.

>
> Since I've blown past the 137 gb barrier by filling a 500 gb drive
> with 400 gb of material, does it matter *how* the drive is filled
> (either physically or logically) ?


Not really, now that I know how you did it.

>
> What is the significance of the "back" end of the hard drive, and
> whether it is used or empty?
>

as above.


> We know that I started with 121 million clusters, and I've used
> slightly over 100 million of them in this test. The back-end is
> pretty small at this point.
>
>> As I recall, the problem is not in creating the files,
>> it is in using them Second, I would like to see some
>> files written to the back of the hard drive and
>> successfully read, updated and re-read.

>
> Since I have 540 replicated sets of files, would a series of random
> file-comparisons made on those sets suffice to show that win-98 is
> able to retrieve the files and perform a byte-level comparison on
> them? Would such a test demonstrate the integrity of the file system
> as well as win-98's ability to work with it?


Comparisons of the written files only proves that both were written
correctly. I am concerned about the ability to randomly update files past
the usual limits. Maybe 'randomly update' is a poor choice of words, as
files are not updated in place - a new file is written, then the old one
'erased'. But I am sure you understand what I mean here.

hmmm put the windows registry or swap file way back there and see what
happens.

I'm very interested in this, but I know I won't ever use it. I have 200 and
300 gig drives on my file server, which runs linux.

Stuart
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)

Stuart Miller wrote:

> Thank you for that info. You are bypassing some of the windows
> disk management routines, so it would be natural to expect
> better results and fewer limitations.


Actually, in some of my previous posts, I've detailed how the
conventional win-98 versions of fdisk and format.com (the "updated"
format.com) are capable of preparing a 250 gb drive with fat-32.
Granted, you can't specify the cluster size with format, but still
those 2 programs work on drives up to at least 250 gb.

> I don't remember the specific limits, but I recall 1 gig hard
> drives were a problem in dos.


Over the years, there have been a number of file system limitations as
fat went from fat to fat-16 to fat-32, and as motherboard bios
parameters have changed to reflect increasing drive capacity.

It's not really correct to pin the limitation on DOS when the
file-system specifications are the issue.

As it stands, the "DOS" that comes with win-98 is fully compatible
with any hard drive you can hang off a given motherboard because DOS
uses system bios calls (enhanced int13). When it comes to IDE (PATA)
drives, win-98 is handicapped by its protected-mode driver
(ESDI_506.pdr) which limits it to drives no larger than 128 gb.

> Question - what does this do with the 2gig/4gig file size
> limit? I use both numbers, because fat-32 can not create
> a file bigger than 4 gigs, but it can not copy files
> between 2 and 4 gigs.


Last year I built a win-XP system for a relative. I loaded it with
all sorts of multi-media, video-editing software. The hard drive was
a 250 gb SATA. And I prepared it with the previously-mentioned WD
software, as a single FAT-32 volume with 4kb cluster size. What I
found is that video-capture software seamlessly spanned the 4 gb
file-size limitation by breaking up the files. I personally prefer
FAT32 over NTFS for a number of reasons, but that argument is for
another thread.

> > Actually, I think that FAT and FAT-16 had a limit of
> > something like 512 entries in the root directory

>
> This is a ms-dos restriction, and applies to all fat-12
> and fat-16 systems.


Again, I see that as a limitation or shortcoming of the fat-16 spec -
not of DOS.

> ms-dos (which is win 95 & 98) would not create any more
> entries after a specific number.


Technically, I think that win-98 FE (first edition) came with FAT-32
capability.

> Comparisons of the written files only proves that both were
> written correctly. I am concerned about the ability to
> randomly update files past the usual limits. Maybe 'randomly
> update' is a poor choice of words, as files are not updated
> in place - a new file is written, then the old one
> 'erased'. But I am sure you understand what I mean
> here.


So basically the task is to write a program that can open a file for
random access and start performing reads and writes to it. The file
would not "move around" the drive in this case, but presumably would
occupy the same physical/logical sectors. Or alternatively, I could
create a text file, close it and open it and edit it, and keep
repeating the process.
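A rough sketch of the first approach (function name and pass count are made up; the point is that `r+b` updates happen in place, so the file should keep occupying the same clusters):

```python
import os
import random

def exercise_file(path, passes=100):
    """Randomly overwrite single bytes in an existing file, flushing and
    reading each one back. Returns False on the first lost write."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            pos = random.randrange(size)
            value = bytes([random.randrange(256)])
            f.seek(pos)
            f.write(value)       # in-place update: no new clusters allocated
            f.flush()
            f.seek(pos)
            if f.read(1) != value:
                return False
    return True
```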

> hmmm put the windows registry or swap file way back
> there and see what happens.


I'm going to have to check something with that system - it seems to
not be using virtual memory according to the performance tab in
System Properties.

-----------

Just a bit of an update regarding DOS scandisk and my 500 gb drive.

After approx. 24 hours, scandisk is still operating on the drive.

Within the first few minutes (perhaps the first 15 - 30 minutes) it
checked the Media Descriptor and the File Allocation Tables. What has
taken the majority of time so far is the check of the Directory
Structure. It is currently at the 24% point of that check.

I'll let it go overnight and see where it's at in the morning.
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)

On Mon, 23 Jul 2007 19:15:57 -0400, 98 Guy <98@Guy.com> wrote:

>File copy test - Windows 98
>
>Conclusion / Comments:
>
>Well, basically, I almost filled a 500 gb hard drive with a replicated
>set of files that range in size from a few bytes to a few mb in size.
>A grand total of over 3 million files spread across almost 200,000
>directories. Windows was functional during and after this file-copy
>process, and the system continues to boot and function normally.
>
>If anyone out there is not satisfied that my test methodology was not
>sufficient to correctly test win-98 for a file-count limitation or a
>directory-size limitation that may arise given current modern large
>hard drives available today, please speak up and describe an alternate
>test method.
>
>As a comment, I don't believe that creating a set of zero-byte files
>will necessarily accomplish or test windows-98 with the same level of
>"stress" as the test I describe here.


Time for a second opinion on this thread.
98Guy, I think you have gone out of your way to prove/disprove many
items about the 98 FS. Any more would be just a waste of time and
effort on your part. No matter what you do, you will always find the
person or persons who will doubt the validity of what you have done or
the methodology used. They will offer alternate methods but will never
go to the lengths you have to prove or disprove a point.
Just tell them to F**K off and do their own testing or ignore
everything/anything you have done.

Art

PS I have always been told the problem of large number of clusters in
98 was due to the fact that on boot the FAT Table was read into memory
and would use up all available memory just to hold the FAT Table. If
this were true it seems that with your 500G test all available memory
would be used and there would be nothing left for programs. It also
seems that your boot times would be in minutes not seconds just to
read the FAT Table.
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)

Ok, there seems to be a problem with enabling virtual memory.

From the System Properties, Performance tab, I am told that virtual
memory is not enabled. When I bring up the virtual memory dialog box,
the radio-button "let windows manage my virtual memory settings" is
selected, and the following information is shown in grey:

Hard Disk: c:\ -14440 MB Free
Minimum: 0
Maximum: no maximum

When I select the radio button "Let me specify my own virtual memory
settings" those settings change to this:

Hard Disk: c:\-14440 MB Free
Minimum: 0
Maximum: 51096

I changed the maximum to 512 (I assume that's mega-bytes) and
restarted. Virtual memory was still showing as being disabled. I set
both the min and max to be 512 and restarted again. It still said
that virtual memory was disabled, but this time the Hard Disk value
had changed to -13928 MB Free (a difference of 512). I changed both
to 128 and virtual memory was still disabled.

Prior to each change, I looked for win386.swp in the root directory,
but it was never there (even when I tried to unhide it using attrib).

So something weird is going on with the swap file and virtual memory.
Might have to resort to messing with registry or .ini settings to see
if I can get it going. Any known issues with Win-98 showing a
negative number for hard drive space or otherwise refusing to
enable the swap file?
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)

Star@*.* wrote:

> They will offer alternate methods but will never got to the
> lengths you have to prove or disprove a point.


It would be good to have someone else replicate what I've done. I
can't be the only one with a bunch of new motherboards, hard drives,
cpu's and memory sitting around... :)

> Just tell them to F**K off and do their own testing or ignore
> everything/anything you have done.


Doing something along those lines has crossed my mind. Recently.

> PS I have always been told the problem of large number of
> clusters in 98 was due to the fact that on boot the FAT
> Table was read into memory and would use up all available
> memory just to hold the FAT Table.


That argument was offered back last February when I first tried
running win-98 on fat-32 volumes with large cluster counts.

I countered by pointing out that by Microsoft's own reasoning, a
volume was never allowed to have more than 4.177 million clusters
because that was the largest number of clusters that DOS scandisk
could process given a supposed 16 mb array size limitation. They
mentioned nothing about windows needing to load the entire FAT table
during normal use. And besides, given Win98's specified minimum
requirements (16 mb ram), you'd have a situation where a good chunk of
that would have been consumed by the FAT.

I've since discovered that DOS scandisk has no such 16 mb memory
limitation. Or perhaps it does, but it doesn't affect its ability to
process a FAT with more than 4 million clusters. I think that the
only time the entire FAT table is read into system memory is during
disk maintenance like Windows scandisk and defrag.

If it's true that you need 4 bytes per cluster to read in the FAT
table, then maybe if I put more memory into the system the windows-ME
versions of scandisk and defrag would work. I think I'll try that.
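The arithmetic behind that 4-bytes-per-cluster guess, using the chkdsk figures from earlier in the thread:

```python
clusters = 121_857_992        # total allocation units reported by chkdsk
fat_bytes = clusters * 4      # FAT32 uses one 4-byte entry per cluster
print(fat_bytes)              # 487,431,968 bytes -- roughly 465 mb,
                              # nearly all of a 512 mb system
```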

> If this were true it seems that with your 500G test all
> available memory would be used and there would be nothing
> left for programs.


Yes, that would have to be the case given 121 million clusters in my
situation (with 512 mb installed memory).

> It also seems that your boot times would be in minutes not
> seconds just to read the FAT Table.


The system boots fast, certainly within 1 minute. I haven't timed it
yet.
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)

On Wed, 25 Jul 2007 00:30:01 -0400, 98 Guy <98@Guy.com> put finger to
keyboard and composed:

>Any known issues with Win-98 showing a
>negative number for hard drive space or otherwise for refusing to
>enable the swap file?


Is this it?

Negative Hard Disk Free Size Reported on Virtual Memory Tab in System
Properties:
http://support.microsoft.com/kb/272620

====================================================================
SYMPTOMS
When you view the Virtual Memory tab in System properties, the hard
disk free size is reported as a negative number if your hard disk has
more than 32 gigabytes (GB) of free space. If you use the arrows (the
spinner controls) to change the values in the Minimum and Maximum size
boxes for the paging file, negative numbers are also displayed.

WORKAROUND
You can ignore the incorrectly listed free space because Windows
internally interprets the numbers correctly as large positive numbers.

The English version of this fix should have the following file
attributes or later:

Date        Time    Version    Size     File name  Operating system
-------------------------------------------------------------------
09/12/2000  02:31p  4.10.2224  384,144  Sysdm.cpl  Windows 98 Second Edition

To resolve this problem, contact Microsoft Product Support Services to
obtain the hotfix.
====================================================================

- Franc Zabkar
--
Please remove one 'i' from my address when replying by email.
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)


"98 Guy" <98@Guy.com> wrote in message news:46A6D1C9.AEA4BBAB@Guy.com...
> Ok, there seems to be a problem with enabling virtual memory.
>
> From the System Properties, Performance tab, I am told that virtual
> memory is not enabled. When I bring up the virtual memory dialog box,
> the radio-button "let windows manage my virtual memory settings" is
> selected, and the following information is shown in grey:
>
> Hard Disk: c:\ -14440 MB Free
> Minumum: 0
> Maximum: no maximum
>
> When I select the radio button "Let me specify my own virtual memory
> settings" those settings change to this:
>
> Hard Disk: c:\-14440 MB Free
> Minimum: 0
> Maximum: 51096
>
> I changed the maximum to 512 (I assume that's mega-bytes) and
> restarted. Virtual memory was still showing as being disabled. I set
> both the min and max to be 512 and restarted again. It still said
> that virtual memory was disabled, but this time the Hard Disk value
> had changed to -13928 MB Free (a difference of 512). I changed both
> to 128 and still virtual memory was still disabled.
>


Why am I not surprised?

> Prior to each change, I looked for win386.swp in the root directory,
> but it was never there (even when I tried to unhide it using attrib).


It can easily be found using other utilities.
I still have my DR-dos (now Caldera) floppies, and their versions of xdir do
a lot more than the MS ones. I can send these (separately from the whole
install) if you want.
Caldera dos is used by Maxtor and WD in their hard drive diagnostic boot
floppies.

>
> So something wierd is going on with the swap file and virtual memory.
> Might have to resort to messing with registry or .ini settings to see
> if I can get it going. Any known issues with Win-98 showing a
> negative number for hard drive space or otherwise for refusing to
> enable the swap file?


This is an old issue from lazy or sloppy programmers.
The number is defined or displayed as a signed integer (-32k to +32k) vs an
unsigned integer (0 to 64k), with some simple overflow problems. The same
logic applies to 'double precision' or 'long' integers.

I got the same thing from dos and win3.1 programs when large hard drives
became common.
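The wrap-around is easy to reproduce with the chkdsk figure from earlier in the thread (a sketch; the small difference from the dialog's -14440 is presumably rounding):

```python
free_kb = 52_323_392               # free space reported by chkdsk
free_mb = free_kb // 1024          # about 51,097 mb free
# A signed 16-bit value tops out at 32,767, so anything above wraps negative:
wrapped = free_mb - 65536 if free_mb > 32767 else free_mb
print(wrapped)                     # -14439, close to the "-14440 MB Free" shown
```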

Stuart
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)

Franc Zabkar wrote:

> Is this it?
>
> Negative Hard Disk Free Size Reported on Virtual Memory Tab in
> System Properties: http://support.microsoft.com/kb/272620


As reported here:

http://support.microsoft.com/kb/272620

I obtained the updated file from the win-98 service-pack thing
(unpacked it manually) and replaced my existing sysdm.cpl. While it
did correct the display of a negative free size on the hard drive, it
did not solve the virtual memory issue.

I then connected another SATA drive to the system (160 gb, with a
single 25 gb FAT-32 partition, formatted with 4 kb clusters, a little
over 6 million clusters) and Win-98 DID enable virtual memory when
instructed to put the swap file on the new drive.

So for some reason win-98 did not want to locate the swap file on the
500 gb drive. Either it did not like the fact that the drive was
formatted with 4kb cluster size (resulting in 121 million clusters) or
it didn't like where on the drive it would have to put it (at the back
10% of the drive).

Also -

I increased the amount of installed memory to 1 gb, and still got
"insufficient memory" when running Windows Scandisk and Defrag on the
500 gb drive.

DOS scandisk does not give an error, but it would have taken 4 days to
run (given it was at the 30% point after 30 hours).
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)

Update:

I reported previously that win-98 didn't like creating/putting the
swap file on the original 500 gb primary drive, but it was ok with a
25 gb partition on a secondary (d:) drive.

I swapped the secondary 25 gb drive with a fresh 500 gb drive
(formatted as a single fat-32 partition, 32kb cluster size, 15 million
total clusters). Win-98 was ok with putting the swap file on it.

I then brought the system memory up to 2 gb, but I got an
"insufficient memory to initialize windows" message early in the
startup process. I then set MaxPhysPage=50000 in system.ini, but the
system just reboots. Set it to 40000 and it booted. Set it to 40010
and it still seems to only show 1022 mb total memory.
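That 1 gb ceiling is consistent with how MaxPhysPage is interpreted (a quick check; MaxPhysPage is a hexadecimal count of 4 kb pages):

```python
pages = 0x40000                    # MaxPhysPage=40000
mb = pages * 4096 // 2**20         # 4 kb pages, converted to megabytes
print(mb)                          # 1024 -- physical memory capped at 1 gb
```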

Has anyone gotten win-98 to run with more than 1 gb memory? How?
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)

98 Guy <98@Guy.com> wrote:

>File copy test - Windows 98
>


<snip>

>
>chkdsk c:
>
> 487,431,968 kilobytes total disk space
> 52,323,392 kilobytes free
>
> 4096 bytes in each allocation unit
> 121,857,992 total allocation units on disk
> 13,080,848 available allocation units on disk
>


The FAT32 implementation provides for 28 bits to be used for cluster
numbers, which means that it is possible to have up to 268,435,445
total clusters on a drive.
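That figure follows from the 28-bit field minus the handful of reserved values (entries 0 and 1, plus the bad-cluster and end-of-chain markers 0x0FFFFFF7-0x0FFFFFFF):

```python
raw = 2 ** 28                 # 268,435,456 possible 28-bit cluster numbers
reserved = 2 + 9              # entries 0 and 1, plus 0x0FFFFFF7-0x0FFFFFFF
print(raw - reserved)         # 268,435,445
```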

Smaller limitations are the result of the tools (FDISK & Format for
example) normally used to create FAT32 drives.

Ron Martell Duncan B.C. Canada
--
Microsoft MVP (1997 - 2008)
On-Line Help Computer Service
http://onlinehelp.bc.ca

"Anyone who thinks that they are too small to make a difference
has never been in bed with a mosquito."
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)

Ron Martell wrote:

> The FAT32 implementation provides for 28 bits to be used for
> cluster numbers, which means that it is possible to have up
> to 268,435,445 total clusters on a drive.
>
> Smaller limitations are the result of the tools (FDISK & Format
> for example) normally used to create FAT32 drives.


There is little to no information regarding Win-98's compatibility or
operational stability with volumes with large cluster-counts, and just
what gets broken at what cluster-count.

The year-2000 update to fdisk does work for 250 gb drives (I haven't
tried it on a 500 gb drive). Format also works on 250 gb drives, but
because of its use of a 32kb cluster size there will be 7.6 million
clusters on a 250 gb drive.
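The cluster counts quoted in this thread all check out if you assume
decimal gigabytes, the way drive makers label capacity:

```python
# Back-of-envelope check of the cluster counts quoted in this thread
# (assuming decimal gigabytes, as printed on the drive label).

def clusters(drive_bytes: int, cluster_bytes: int) -> int:
    return drive_bytes // cluster_bytes


print(clusters(250 * 10**9, 32 * 1024))  # ~7.6 million, 250 gb / 32 kb
print(clusters(500 * 10**9, 32 * 1024))  # ~15 million, 500 gb / 32 kb
print(clusters(500 * 10**9, 4 * 1024))   # ~122 million, 500 gb / 4 kb
```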

I am not sure if the native win-98 versions of scandskw.exe,
dskmaint.dll and defrag.exe will operate on a volume that exceeds 4.17
million clusters, but the win-me versions will, at least up to 31
million clusters. The windows ME versions did not function on a
volume with 121 million clusters, displaying an "insufficient memory"
message (even on a system with 1 gb of memory).

The MS-DOS version of scandisk.exe does not seem to have a
cluster-count limitation and has been seen to run without issue even
on a 500 gb drive with 121 million clusters (although it was only
allowed to run for 30 hours before being terminated - it was projected
that it would have taken 3 more days to complete its scan).

It has been speculated that the number of clusters is limited because
win-98 loads the entire FAT table into memory during normal
operational use, but given the recent test with a 121-million cluster
drive that theory appears to be wrong.
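For what it's worth, the FAT itself is big enough at that cluster
count to make the theory plausible on paper (4 bytes per 32-bit FAT
entry, using the cluster counts from the chkdsk output earlier in the
thread):

```python
# Size of the FAT at 4 bytes per 32-bit entry. At 121.9 million
# clusters the table alone approaches half a gigabyte, which is why
# "win-98 loads the whole FAT into memory" looked like a plausible
# suspect on a 512 mb machine.

FAT_ENTRY_BYTES = 4  # each FAT32 entry is 32 bits


def fat_size_mb(cluster_count: int) -> int:
    return cluster_count * FAT_ENTRY_BYTES // (1024 * 1024)


print(fat_size_mb(121_857_992))  # ~464 MB, the 500 gb / 4 kb volume
print(fat_size_mb(15_258_789))   # ~58 MB, the 500 gb / 32 kb volume
```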

The only issue so far with win-98 installed on a 500 gb volume with
121 million clusters is that it will not create or place the swap file
on it, hence virtual memory will not be enabled. It will create /
place the swap file on a secondary drive, even if that drive is
another 500 gb drive (but formatted with 32kb cluster size resulting
in 15 million clusters).
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)

98 Guy wrote:
> Ron Martell wrote:
>
>> The FAT32 implementation provides for 28 bits to be used for
>> cluster numbers, which means that it is possible to have up
>> to 268,435,445 total clusters on a drive.
>>
>> Smaller limitations are the result of the tools (FDISK & Format
>> for example) normally used to create FAT32 drives.

>
> There is little to no information regarding Win-98's compatibility or
> operational stability with volumes with large cluster-counts, and just
> what gets broken at what cluster-count.
>
> The year-2000 update to fdisk does work for 250 gb drives (I haven't
> tried it on a 500 gb drive). Format also works on 250 gb drives, but
> because of its use of 32kb cluster size there will be 7.6 million
> clusters on a 250 gb drive.
>
> I am not sure if the native win-98 versions of scandskw.exe,
> dskmaint.dll and defrag.exe will operate on a volume that exceeds 4.17
> million clusters but the win-me versions will, at least up to 31
> million clusters. The windows ME versions did not function on a
> volume with 121 million clusters, displaying an "insufficient memory"
> message (even on a system with 1 gb memory).
>
> The MS-DOS version of scandisk.exe does not seem to have a
> cluster-count limitation and has been seen to run without issue even
> on a 500 gb drive with 121 million clusters (although it was only
> allowed to run for 30 hours before being terminated - it was projected
> that it would have taken 3 more days to complete its scan).
>
> It has been speculated that the number of clusters is limited because
> win-98 loads the entire FAT table into memory during normal
> operational use, but given the recent test with a 121-million cluster
> drive that theory appears to be wrong.
>
> The only issue so far with win-98 installed on a 500 gb volume with
> 121 million clusters is that it will not create or place the swap file
> on it, hence virtual memory will not be enabled. It will create /
> place the swap file on a secondary drive, even if that drive is
> another 500 gb drive (but formatted with 32kb cluster size resulting
> in 15 million clusters).

I run W98se updated with the last update available. Although I do not
have a very large single drive, I do have 5 WD160s on a RocketRAID
board configured in RAID 5, which gives me a 600G volume as seen by
W98. I used Partition Magic 8 to partition that drive into 5 smaller
volumes and have had no problems, including a drive failure where I
ran on a broken array for a week until I was able to install a good
drive and rebuild the array. That was about 18 months ago, and with
the volumes near full I have not had any problems as yet. I don't
know if the RocketRAID board handles things differently at the drive
level, but since W98 saw the full 600G in the beginning I think it
should be very similar.

Due to limitations in W98 I have the partitions set to different
sizes: small for many small files and large for the very large files.
If I save a bunch of small files on a large partition I get as much as
10 times more disk space consumed as when they are placed on a smaller
partition. This is all due to the cluster allocation being so large
for large partitions. One I just checked in Windows Explorer, a 200G
partition, shows 43G of files taking 106G of space (many small files
not yet compressed).

I use RarLabs' WinRAR to consolidate older data files into one archive
that takes just over 43G of space, but all of the contents are still
directly browsable using Total Commander instead of Windows Explorer.
These files are not available to other apps; I need to extract any
file an app needs, or on occasion drag and drop it into the app rather
than extract it. This is mainly for archiving files not regularly
needed while keeping them available if they are needed.
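The blow-up on big partitions is easy to model: every file occupies a
whole number of clusters, so small files waste most of a large
cluster. The file mix below is illustrative, not James's actual data:

```python
# Rough model of cluster slack: each file is rounded up to a whole
# number of clusters. A million 4 kb files (about 4 gb of data) eat
# roughly 8x the space on 32 kb clusters as on 4 kb clusters.
import math


def on_disk(file_sizes, cluster_bytes):
    """Total bytes consumed on disk for the given file sizes."""
    return sum(math.ceil(s / cluster_bytes) * cluster_bytes
               for s in file_sizes)


files = [4 * 1024] * 1_000_000             # a million 4 kb files
print(on_disk(files, 32 * 1024) // 2**30)  # GiB used, 32 kb clusters
print(on_disk(files, 4 * 1024) // 2**30)   # GiB used, 4 kb clusters
```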

James
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)

James wrote:

> I don't know if the RR
> board handles things differently at the drive level but since W98 saw
> the full 600G in the beginning I think it should be very similar.


That's easy to see. Go to Device Manager -> (your IDE controller)
Properties -> Driver -> Driver File Details. When ESDI_506.PDR is used
AND the file is Microsoft's, Windows sees this as a 'normal' disk.
That should not be the case here, because then you could only address
128 GiB.
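The 128 GiB figure falls out of the addressing: drivers that predate
48-bit LBA (the stock ESDI_506.PDR among them, as I understand it) can
reach at most 2^28 sectors of 512 bytes:

```python
# The 128 GiB barrier: 28-bit LBA addresses at most 2^28 sectors of
# 512 bytes each. 48-bit LBA raised the ceiling out of sight.

SECTOR_BYTES = 512


def lba_limit_gib(address_bits: int) -> int:
    return (2**address_bits * SECTOR_BYTES) // 2**30

print(lba_limit_gib(28))  # 128 GiB -- the ESDI_506.PDR barrier
print(lba_limit_gib(48))  # 134217728 GiB (128 PiB) with 48-bit LBA
```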
 
Re: Windows 98 large file-count tests on large volume (500 gb hard drive)

Ingeborg wrote:

> That's easy to see. Goto Device Manager -> (your IDE controller)
> Properties -> Driver -> Driver File Details. When ESDI_506.PDR is
> used AND this file is Microsoft's, ...


I'm not positive, but I suspect that Win-98 loads ESDI_506.PDR
because it detects a "Primary IDE controller". Since you will usually
connect an optical drive to the primary IDE controller, I'm also not
sure whether ESDI_506.PDR is used to "talk" to the optical drive.

But in any case, the fact that ESDI_506.PDR is loaded and/or
associated with the IDE controller is not, by itself, an indication
that you will have a problem with a drive larger than 128 gb. The
drive must be directly connected to, or mapped to, the IDE controller
for that to be a problem. If the drive is PATA/IDE, then yes, the
odds are very high that ESDI_506.PDR will end up controlling it. If
the drive is SATA, then it's quite easy to arrange things so that
ESDI_506.PDR is NOT used to control it.
 