I am trying to do some basic image processing on the GPU in HLSL with DirectX for speed, and I have run into an interesting problem. I have figured out how to create a render-target surface (because I don't want to display to the screen) that is exactly the same size as my input image.

If I write a simple pixel shader that just samples the input texture at TEXCOORD0 and writes that color to the output texture, I would expect the input and output images to be identical, since their dimensions are identical. However, the output image looks slightly blurry compared to the input, which makes me think DirectX did not process the input on a pixel-by-pixel basis, but instead did some sort of subsampling of the input image and interpolated that to create the output.

Has anyone else dealt with this problem, or have any ideas? I'm basically trying to get the video card to do pixel-by-pixel image processing on an input image. Thanks.
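
For reference, here is roughly what my pass-through pixel shader looks like; the sampler register and entry-point name are just placeholders for my actual setup:

    // Pass-through pixel shader: sample the input texture at the
    // interpolated texture coordinate and output the color unchanged.
    sampler2D inputTex : register(s0);

    float4 PassThroughPS(float2 uv : TEXCOORD0) : COLOR0
    {
        return tex2D(inputTex, uv);
    }
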