OT. . .

  • Thread starter: Tony Sperling
Re: OT. . .

According to nVidia, and a few other sources, the fact that
you have two graphics cards in an SLI configuration is totally
transparent to the programs; i.e., they don't know, and
don't care, whether there is one, two, or four video cards.
The program feeds the video information to the
video system, and that's all it knows. The video software,
along with the hardware, decides how to process the information.

And, that's exactly what I quoted below:

>> Both cards are given the same part of the 3D scene to render, but
>> effectively half of the work load is sent to the slave card through a
>> connector called the SLI Bridge. As an example, the master card works
>> on the top half of the scene while the slave card works on the bottom
>> half. When the slave card is done, it sends its output to the master
>> card, which combines the two images to form one and then outputs the
>> final render to the monitor.
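As a rough sketch (all names here are made up for illustration; the real work division happens inside the driver, invisibly to the application), the split-frame scheme quoted above amounts to:

```python
# Illustrative sketch of split-frame rendering (SFR) as quoted above.
# "render_rows" stands in for a GPU rendering a horizontal band; there
# is no such API call - the driver does this behind the scenes.

def render_rows(rows, frame):
    """Stand-in for one GPU rendering a band of scanlines of a frame."""
    return [f"pixel_row_{r}_of_frame_{frame}" for r in rows]

def sfr_frame(frame, height):
    split = height // 2
    top = render_rows(range(0, split), frame)          # master card's half
    bottom = render_rows(range(split, height), frame)  # slave card's half
    # The slave sends its half over the SLI bridge; the master combines
    # the two halves into one image and outputs it to the monitor.
    return top + bottom

full = sfr_frame(frame=0, height=8)
assert len(full) == 8  # the combined output covers every scanline once
```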



jabloomf1230 wrote:
> Theo,
>
> I'm not sure what your point is? The original question was what happens
> when an SLI-enabled system encounters a program that is not SLI aware.
> Are you saying that in that circumstance, both GPUs are still used by
> the software to render graphics? Or that if only one GPU is in use, it can
> use the RAM on the second card for loading textures, etc.?
>
> Jay
>
> Theo wrote:
>> I think you need to research SLI a bit more. Just an extract:
>>
>> Both cards are given the same part of the 3D scene to render, but
>> effectively half of the work load is sent to the slave card through a
>> connector called the SLI Bridge. As an example, the master card works
>> on the top half of the scene while the slave card works on the bottom
>> half. When the slave card is done, it sends its output to the master
>> card, which combines the two images to form one and then outputs the
>> final render to the monitor.
>>
>> This does not agree with your hypothesis at all.
>>
>>
>> jabloomf1230 wrote:
>>> Why don't you post your thoughts on a video card website and see what
>>> kind of response that you get? SLI does not allow crossover use of
>>> RAM from one card to the other. For example, I have a PC with a 7950
>>> GX2, which has two GPUs running in SLI. The card has 1 GB of RAM, but
>>> all the posted specs indicate clearly that each GPU has access to its
>>> own 512 MB of RAM.
>>>
>>> All SLI does is to either a) have the two GPUs render half of each
>>> frame or b) have the two GPUs render alternate frames. Scaling up
>>> tri- and Quad-SLI just splits things up in thirds or quarters. There
>>> is no magic to any of this and if a program is not SLI capable, it
>>> uses only the primary GPU. The other GPU does nothing unless you want
>>> it to drive a second monitor in multi-monitor mode.
>>>
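The two modes listed in that last quote (half of each frame, or alternate whole frames) can be sketched in a few lines. This is purely illustrative; the assignment is done inside the driver, and the function name is hypothetical:

```python
# Illustrative sketch of alternate-frame rendering (AFR): with N GPUs,
# frame i is handled entirely by GPU i % N, so textures and scene data
# must be resident on every card.

def afr_assign(num_frames, num_gpus):
    """Map each frame index to the GPU that renders it."""
    return {f: f % num_gpus for f in range(num_frames)}

schedule = afr_assign(num_frames=6, num_gpus=2)
# GPU 0 gets frames 0, 2, 4; GPU 1 gets frames 1, 3, 5.
assert [f for f, g in schedule.items() if g == 0] == [0, 2, 4]
```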
 
Re: OT. . .

Conceptually, though, if software polls for 'videoRamSize', will it
report the combined RAM of both cards, regardless of whether the software is
SLI aware or not?

It seems clear that the SLI-enabled video driver will delegate the job as
best it knows how, and the software wouldn't have to care - but what if you
want to know?

I must confess, what I so naively thought was an innocent enough question
has turned into something profoundly philosophical. Doubt is entering the
mind.

I will try and investigate further.


Tony. . .


"Theo" <theo@discussions.microsoft.com> wrote in message
news:%23wKbnZ2gIHA.5160@TK2MSFTNGP05.phx.gbl...
> [snip]
 
Re: OT. . .

As you know, Windows provides for the infamous >3 GB "memory hole" to
accommodate the hardware's memory address needs. For example, my 7950 GX2
uses one block of 512 MB of addresses for one GPU and a second, distinct
512 MB block for the other GPU. Like I said before, I'm open to any
suggestions, links, etc. indicating that somehow all the video RAM
addresses are one big happy block, but everything from nVidia indicates
otherwise. The drivers just aren't that sophisticated. Nor is the way
that they handle SLI. If you look at threads on this topic on such
reputable graphics websites as Guru of 3D and nVNews, you will see that
there is certainty (as best as can be expected from a bunch of video
gamers) on this issue.

The reason that this issue surfaced at all was that when the original
7900GX2 and 7950GX2 models came out, they were advertised with a (at
that time) "whopping" 1GB of video RAM. This was subsequently proven to
be "marchitecture", as nVidia was combining the video RAM for each GPU
as if any one GPU could use all of it. The SLI solution chosen by nVidia
is simple, if not elegant. You scale up by alternating frames or equal
proportions of a frame and that's that. Even with that simple concept,
Windows SLI drivers have been buggy from the beginning. And as you know,
Vista didn't support SLI for a long time after its release, driving many
potential customers away.
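The "marchitecture" arithmetic described above is easy to state explicitly (the numbers are the 7950 GX2 figures from this thread):

```python
# The marketing vs. usable numbers for a dual-GPU card like the 7950 GX2:
# the spec sheet adds the two pools together, but any single GPU can only
# address its own pool.

per_gpu_mb = 512
num_gpus = 2

advertised_mb = per_gpu_mb * num_gpus   # what the box said: "1 GB"
usable_per_gpu_mb = per_gpu_mb          # what one GPU can actually address

assert advertised_mb == 1024
assert usable_per_gpu_mb == 512
```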

As to the non-SLI issue, there is a wonderful little program called
rthdribl, which has been used in the past to stress GPUs, especially
those that are overclocked. Unfortunately, it is not SLI aware; in fact,
if you try to run it in any SLI mode, it runs even slower than on one
GPU. If you monitor GPU temperatures while running rthdribl maxed out,
you can see that the temperature of GPU0 will warm up nicely while the
temperature of GPU1 stays just above the idle value. This doesn't
address the shared RAM issue, but it does demonstrate that non-SLI-aware
programs only rely on one GPU.

Tony Sperling wrote:
> [snip]
 
Re: OT. . .

Hmm. . . here, finally, is one answer that seems to make sense to me:

*****
SLI does not double the amount of video RAM. Your available amount will
be equal to the lowest amount on your cards. So if you put a 256 MB card
with a 512 MB card, you would still only have 256 MB of video memory
available.

The reason for this is that each card renders a different frame, so they
both have to have the exact same information in their memory to do the
work. The information gets copied to each card, not doubled. This is a
common SLI misconception.
*****
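The rule in that quoted answer boils down to a one-liner (the function name is my own; the driver does this internally):

```python
# Rule from the quoted answer: with mismatched cards in SLI, the
# effective pool is the smaller card's, because every texture has to be
# mirrored on both cards rather than split between them.

def effective_sli_vram(card_vram_mb):
    """Usable video memory for an SLI pair, in MB."""
    return min(card_vram_mb)

assert effective_sli_vram([256, 512]) == 256  # 256 MB card + 512 MB card
```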

The FAQ on the nVidia SLI Forum mentions the different schemes for dividing
the display between the two cards. The one that divides the screen into two
halves is said to be more computationally intensive, because the driver must
continuously calculate the spot where the division is applied. This would
mean that the data being calculated on would have to be in memory on both
cards at all times, I guess. Great! Each day, a new lesson learned!
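The continuous calculation the FAQ alludes to can be sketched as nudging the split line each frame toward the GPU that finished faster. This is a hypothetical illustration of the idea, assuming the driver measures per-GPU render times; the function and parameters are invented:

```python
# Hypothetical sketch of SFR load balancing: move the horizontal split
# so the slower GPU gets a smaller band and both halves take about the
# same time. "split" is the scanline where the master's half ends.

def adjust_split(split, height, t_master, t_slave, step=8):
    if t_master > t_slave:
        split = max(step, split - step)           # shrink master's band
    elif t_slave > t_master:
        split = min(height - step, split + step)  # grow master's band
    return split

# Master (top half) was slower last frame, so its band shrinks one step.
assert adjust_split(300, 600, t_master=9.0, t_slave=7.0) == 292
```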

NB! - the FAQ is also mentioning that if you are not heavily into 'Gaming'
and primarily using your machine for this purpose, SLI will not be
economically attractive!!!

(thanks, FAQ!)


Tony. . .


"jabloomf1230" <jabloomf@nycap.rr.com> wrote in message
news:%23K2EWx8gIHA.6032@TK2MSFTNGP03.phx.gbl...
> [snip]
 
Re: OT. . .

Yep. Mostly the video RAM is for loading textures, so each card needs
its own RAM to do so.

Check out this item at Fudzilla regarding the new dual-GPU nVidia
9800GX2. The authors make a "snide" comment about the amount of video RAM:

http://www.fudzilla.com/index.php?option=com_content&task=view&id=6251&Itemid=1

J

Tony Sperling wrote:
> [snip]
 