DirectX with GDI+

markbiddlecom

Member
Joined
Jun 30, 2003
Messages
13
Location
Denver, CO
OK, so I've been surfing a while for information on this, and I've come up miserably short:

I'm writing a simple top-down game engine with Direct3D 9, and I was going to just use GDI+ for some of the special-effects drawing code, because I know how to work with it and I like the interfaces. Anyway, I found out pretty fast that you can't draw between the BeginScene and EndScene methods, which makes sense, but I was having initial success by drawing the graphics after executing the Present method.

At least, until I switched to full-screen mode. Now the GDI+ drawing never shows up at all. I've moved the code around, slept the current thread after drawing, tried both synchronous and asynchronous approaches, and I just can't figure out what's going on.

Anyway, I gave up on that to go deal with other issues (AI, scripting, etc.), and stumbled across some methods in the managed API that I thought would do the job for me: Device.GetRenderTarget (returns a Direct3D.Surface object), Surface.GetGraphics, and Surface.ReleaseGraphics. (Don't quote me on the names, but I think they're right.)

Well, I figured that I could grab a reference to one of the surfaces, snatch a Graphics object from the surface, draw some old-fashioned lines, and then release the graphics reference. The only problem, as I mentioned earlier, is that I can't find any documentation whatsoever. Microsoft's documentation for GetGraphics says "Returns a GDI+ Graphics instance," or something to that effect.
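In code, the idea I'm chasing looks roughly like this (a sketch only; as I said, don't hold me to the exact names):

C#:
// Grab the render target, wrap a GDI+ Graphics around it,
// draw, and then hand the surface back. Assumes the
// Microsoft.DirectX.Direct3D and System.Drawing namespaces.
Surface target = device.GetRenderTarget(0);
Graphics g = target.GetGraphics();
g.DrawLine(Pens.Black, 0, 0, 100, 100);
target.ReleaseGraphics();
g.Dispose();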

I've placed the code just about everywhere I can think of in relation to the BeginScene/EndScene/Present block, and I can't seem to execute the GetGraphics method without getting a vague DirectX exception.

So it is that I found my way to this forum, and I'm practically on my knees here begging for help. It's either this or try to learn whatever functionality the Direct3D API offers for drawing semitransparent lines and other such things. Or just move to GDI+ for the whole blasted engine...

Thanks in advance,
Mark
 
I think the surfaces also support a GetDC method for using the GDI functions (not GDI+).

But, to help with the errors (since the DX9 docs are mostly incomplete), use dbmon. Here's what to do:
Before you open Visual Studio, go to your DX9 install folder, then to Bin\DXUtils, and run dbmon.exe. It will show a console-like window with some text. Leave this open.

Now open your project in Visual Studio. Make sure you're in Debug mode and run the application. Set a breakpoint on any error lines you have (or just after them), or just exit the app. If you now look at your console window (dbmon), you'll (hopefully) see a ton of good information about what's going on. The messages often reference invalid parameters and constant values in the C++ naming convention (all-caps constant names instead of the managed enum names), but you should be able to figure out what's wrong.

Good luck!
-Ner
 

EDIT

After some digging and additional playing around, I've managed to answer my own question.

Basically, GetGraphics() seems to be a GDI+ wrapper for the GetDC method. The GetDC method itself is a wrapper for the process of locking a rectangle on the render target and then dealing with a graphics stream.

Now, my knowledge of the Graphics class and GDI+ is limited, but my understanding is that it still works with old-fashioned device contexts under its shiny new exterior (perhaps at the pixel level), and that you can essentially wrap a Graphics object around any device context, so this all makes sense. I think it does, anyway.
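If that's right, the same wrapping trick works against any plain device context. A minimal sketch of the idea against an ordinary window handle (the DrawOnWindow helper and its hwnd parameter are just for illustration, not anything from the DirectX API):

C#:
using System;
using System.Drawing;
using System.Runtime.InteropServices;

class DcWrapDemo
{
    [DllImport("user32.dll")]
    static extern IntPtr GetDC(IntPtr hWnd);

    [DllImport("user32.dll")]
    static extern int ReleaseDC(IntPtr hWnd, IntPtr hDC);

    // Illustrative helper: wrap a GDI+ Graphics around a raw HDC.
    static void DrawOnWindow(IntPtr hwnd)
    {
        IntPtr hdc = GetDC(hwnd);
        try
        {
            // Graphics.FromHdc builds a GDI+ surface over the old-style DC.
            using (Graphics g = Graphics.FromHdc(hdc))
            {
                g.DrawLine(Pens.Black, 0, 0, 100, 100);
            }
        }
        finally
        {
            ReleaseDC(hwnd, hdc);
        }
    }
}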

Anyway, in order to use LockRectangle, GetDC, or GetGraphics, you need to create a lockable back buffer, as the original error message alluded to. This can be done by adding the PresentFlag.LockableBackBuffer flag to the PresentFlag member of the PresentParameters structure.
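For anyone following along, a minimal C# sketch of that setup (assuming a windowed device; form stands in for whatever render window you're using, and the rest of the parameters are just typical defaults):

C#:
using System.Windows.Forms;
using Microsoft.DirectX.Direct3D;

// ... inside your initialization code, with "form" as your render window:
PresentParameters pp = new PresentParameters();
pp.Windowed = true;
pp.SwapEffect = SwapEffect.Discard;

// Make the back buffer lockable so GetDC/GetGraphics/LockRectangle work.
pp.PresentFlag |= PresentFlag.LockableBackBuffer;

Device device = new Device(0, DeviceType.Hardware, form,
    CreateFlags.SoftwareVertexProcessing, pp);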

The Microsoft documentation notes that doing so incurs a performance penalty on some cards. I haven't done any benchmarking myself yet, but I'm doing relatively simple 2D emulation (lol--never thought I'd say "2D emulation") and I'm not too worried about the performance.

Thanks again for your help :D!

-Mark

ORIGINAL MESSAGE

Wow! That's actually really helpful--in that it makes me see how much I've yet to learn :p.

I've got a collection of errors and warnings here. Some of them seem to be unrelated --

2644: Direct3D9: (WARN): Device that was created without D3DCREATE_MULTITHREADED is being used by a thread other than the creation thread.

-- and then I've got this beauty:

2644: Direct3D9: (ERROR) :For D3DPOOL_DEFAULT surfaces, GetDC requires the surface be either a lockable render target, an offscreen plain, or created with D3DUSAGE_DYNAMIC.

This is definitely an error message I can decipher--so thanks for your help!

...aaaaaaaand...

Since I'm here, I'll try picking your brain a bit extra: what are the consequences of creating a device with the MULTITHREADED flag--is there a reason I wouldn't want to do this? As a little extra information, I've got a class in my app called a RenderPipeline to which other objects are attached. The pipeline calls prerender, render, and postrender objects in sequence on its own thread. As the error indicates, that thread is not the same one that created the device. However, all accesses to the device are performed on the new thread.

Secondly, I'm having some trouble determining how to create a "lockable render target," or even how to specify D3DUSAGE_DYNAMIC for the surface's creation. Is this a flag I have to include when I'm creating the presentation parameters?

Also, it seems as though getting a Graphics object just uses the DC after all--is it therefore true that if I dug around for information on the GetDC method I'd find something more to help me here?

-Mark
 
Whew, someone who types as much as me :)

I'm not sure about the effects of the multithreaded flag, so I can't offer any help there. I have used the Dynamic usage before, though. For my purposes, I was locking a copy of the original texture and tweaking the bits. Here's the code I used to copy and lock a texture:
C#:
// Describe the original texture's top mip level.
SurfaceDescription desc = textureOrig.GetLevelDescription(0);

// Create a same-sized, same-format texture to hold the copy.
Texture textureCopy = new Texture(dev, desc.Width, desc.Height, 1, 0, desc.Format, Pool.Managed);
Surface dst = textureCopy.GetSurfaceLevel(0);
Surface src = textureOrig.GetSurfaceLevel(0);
SurfaceLoader.FromSurface(dst, src, Filter.None, 0);

// Get the bits for the surface and re-color them.
ColorType[] c = (ColorType[])dst.LockRectangle(typeof(ColorType), LockFlags.None, desc.Width * desc.Height);

// ... tweak the colors in c here, then unlock when finished:
dst.UnlockRectangle();

ColorType is a struct I defined that I KNOW matches the format of the texture I'm loading. I know because I tweaked the texture after loading to ensure it's in the proper format. If you use the built-in methods to load a texture from a file or resource, the pixel format you pass is only taken as a hint, so you can't count on it.
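As an example, for an A8R8G8B8 texture the struct might look like this (purely illustrative--match the fields to whatever format your texture actually reports):

C#:
using System.Runtime.InteropServices;

// Illustrative pixel struct for an A8R8G8B8 surface: in memory
// (little-endian) the bytes run blue, green, red, alpha.
[StructLayout(LayoutKind.Sequential)]
public struct ColorType
{
    public byte Blue;
    public byte Green;
    public byte Red;
    public byte Alpha;
}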

Good Luck!

-Nerseus
 
Code?

Here's some sample code to do what we've been talking about, for anybody it may help:

[VB]
Public Sub drawWithGDIPlus()

    Dim device As Microsoft.DirectX.Direct3D.Device
    Dim verts As VertexBuffer

    ' We'll use these two variables in our GDI+ drawing.
    Dim surface As Microsoft.DirectX.Direct3D.Surface
    Dim g As System.Drawing.Graphics

    Dim pp As New Microsoft.DirectX.Direct3D.PresentParameters

    ' Set up your normal presentation parameters here.
    ' My experience tells me that this method works both in
    ' windowed and full-screen modes.

    ' Apply your normal flags to the pp.PresentFlag member, and
    ' then add this flag:
    pp.PresentFlag = pp.PresentFlag Or _
        Microsoft.DirectX.Direct3D.PresentFlag.LockableBackBuffer

    ' Note that I combined with Or because I'm assuming you may
    ' be using other flags. If not, you can just assign the flag
    ' directly.

    ' Now create your device as you normally would, using pp as
    ' your present parameters.

    ' Now, begin your scene, as you're accustomed to doing...
    device.BeginScene()

    ' Now perform your drawing code as per normal...

    ' When you need it, intersperse this code to use GDI+ to draw
    ' over the current contents of the back buffer:

    surface = device.GetRenderTarget(0)
    g = surface.GetGraphics()

    ' Use g to draw to the surface. You'll need to know the
    ' surface dimensions beforehand.

    ' For example:
    g.DrawLine( _
        New System.Drawing.Pen(System.Drawing.Color.Black), _
        0, _
        0, _
        1, _
        1 _
    )

    ' When you're done with the GDI+ drawing, make sure to
    ' perform the following call:
    surface.ReleaseGraphics()

    ' Then clean up the graphics object. This isn't
    ' required, but it can help:
    g.Dispose()
    g = Nothing

    surface.Dispose()
    surface = Nothing

    ' It's important to remember that you can't intermix GDI+ and
    ' DirectX drawing code; doing so will result (in the best case)
    ' in a runtime error. However, it would appear that you can use
    ' GDI+ code at any stage and more than once during a single scene.

    device.EndScene()

    ' And, of course, make sure to clean up the DirectX objects you
    ' used.
    verts.Dispose()
    device.Dispose()

    verts = Nothing
    device = Nothing

End Sub
[/VB]
 
This is pretty good, but it eats up CPU. I also tried:

PHP:
Private BackSurface As Direct3D.Surface

dev.BeginScene()

' Bitmap to hold the GDI+ graphics.
Dim window As Bitmap = New Bitmap(rect.Width, rect.Height)
' Holds the graphics until we're ready to blit.
Dim surface As Graphics = Drawing.Graphics.FromImage(window)

surface.DrawLine( _
    New System.Drawing.Pen(System.Drawing.Color.Black), _
    0, _
    0, _
    1, _
    1 _
)

BackSurface = BackSurface.FromBitmap(dev, window, Pool.Default)

' This is where the surface is copied to the back buffer.
dev.StretchRectangle( _
    BackSurface, _
    New Rectangle(0, 0, AForm.ClientRectangle.Width, AForm.ClientRectangle.Height), _
    BackBuffer, _
    New Rectangle(0, 0, AForm.ClientRectangle.Width, AForm.ClientRectangle.Height), _
    TextureFilter.None)

dev.EndScene()
dev.Present()


This is also CPU intensive. What I would like to do is use DirectDraw to draw all the circles and squares and then somehow present that on the Direct3D device. Anyone have any idea how to do that?
 
The performance problem here is not the CPU. There are two problems:

1. Bus traffic. You are copying the image from the back buffer to system memory, changing it, and then copying it back. The numbers depend on your display depth, resolution, and bus speed, but it's not free.

2. Parallelism. Remember that on most graphics cards, the 3D work is rendered on the video card. This can happen *while* you're doing your GDI and other stuff. When you lock the buffer, you are going to sit there until the card's done rendering, effectively forcing the GPU and CPU to run serially.

A better way to get the effect you're after (not the best, but better) is to render your GDI stuff into a separate texture and apply that texture to a two-polygon square in your 3D scene. You'll still incur bus traffic sending the texture *to* the card, but not as much, and you won't break parallelism (as much -- the card will still have to wait for your texture...)
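A rough sketch of that approach in C#, assuming a 256x256 overlay and an already-created device (the sizes, the overlay contents, and the blend states are placeholders to show the shape of it, not a tuned implementation):

C#:
using System.Drawing;
using Microsoft.DirectX.Direct3D;

// Draw the GDI+ content into a bitmap and upload it once (or only
// when it changes); re-uploading every frame is the expensive part.
Bitmap overlay = new Bitmap(256, 256);
using (Graphics g = Graphics.FromImage(overlay))
{
    g.Clear(Color.Transparent);
    g.DrawEllipse(Pens.White, 10, 10, 100, 100);
}
Texture overlayTex = new Texture(device, overlay, Usage.None, Pool.Managed);

// A screen-aligned quad (two triangles as a strip) to stretch the
// overlay texture across.
CustomVertex.TransformedTextured[] quad = new CustomVertex.TransformedTextured[4];
quad[0] = new CustomVertex.TransformedTextured(0F, 0F, 0F, 1F, 0F, 0F);
quad[1] = new CustomVertex.TransformedTextured(256F, 0F, 0F, 1F, 1F, 0F);
quad[2] = new CustomVertex.TransformedTextured(0F, 256F, 0F, 1F, 0F, 1F);
quad[3] = new CustomVertex.TransformedTextured(256F, 256F, 0F, 1F, 1F, 1F);

// Inside BeginScene/EndScene, after the 3D pass: blend the overlay
// over the scene using its alpha channel.
device.RenderState.AlphaBlendEnable = true;
device.RenderState.SourceBlend = Blend.SourceAlpha;
device.RenderState.DestinationBlend = Blend.InvSourceAlpha;
device.SetTexture(0, overlayTex);
device.VertexFormat = CustomVertex.TransformedTextured.Format;
device.DrawUserPrimitives(PrimitiveType.TriangleStrip, 2, quad);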
 