Direct3D. Something wrong with texture sampling in pixel shader on ARM device.

Hi!
I'm working on the ARM version of my project. The project uses pixel shaders for image processing. The x86 and x64 versions render everything correctly, but the same shaders on the ARM version produce artifacts in the rendered effects. I discovered that the values sampled from the texture are slightly wrong. To make sure this is really where the error is, I created a clean project (Visual C++ -> Windows Store -> Direct3D app template) and added texture sampling to the CubeRenderer pixel shader. Here is the sampler state description:
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.BorderColor[0] = 0.0f;
samplerDesc.BorderColor[1] = 0.0f;
samplerDesc.BorderColor[2] = 0.0f;
samplerDesc.BorderColor[3] = 0.0f;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
samplerDesc.Filter = D3D11_FILTER::D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.MaxLOD = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MipLODBias = 0.0f;
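(For completeness, a minimal sketch of how this description might be turned into a sampler object, assuming the template's m_d3dDevice and DX::ThrowIfFailed helpers; m_samplerState is a hypothetical member added only for this example:)
// Sketch only: create the sampler from samplerDesc above.
// m_samplerState is assumed to be a Microsoft::WRL::ComPtr<ID3D11SamplerState>.
DX::ThrowIfFailed(
    m_d3dDevice->CreateSamplerState(&samplerDesc, &m_samplerState)
    );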
The texture description:
textureDesc.Width = 2;
textureDesc.Height = 2;
textureDesc.MipLevels = 1;
textureDesc.ArraySize = 1;
textureDesc.Format = DXGI_FORMAT::DXGI_FORMAT_R8G8B8A8_UNORM;
textureDesc.SampleDesc.Count = 1;
textureDesc.SampleDesc.Quality = 0;
textureDesc.Usage = D3D11_USAGE::D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = bindFlag;
textureDesc.CPUAccessFlags = cpuAccess;
textureDesc.MiscFlags = 0;
and the code that fills the texture's initial data:
D3D11_SUBRESOURCE_DATA resdata;
uint8_t pixval = 130;
DirectX::PackedVector::XMCOLOR c;

c.a = pixval;
c.r = pixval;
c.g = pixval;
c.b = pixval;

DirectX::PackedVector::XMCOLOR pixels[4] =
{
c, c,
c, c
};

resdata.pSysMem = pixels;
resdata.SysMemPitch = 2 * 4;
resdata.SysMemSlicePitch = 0;
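The creation call itself isn't shown above, so here is a hedged sketch of it, assuming bindFlag is D3D11_BIND_SHADER_RESOURCE and cpuAccess is 0; m_texture and m_textureView are hypothetical ComPtr members added only for illustration:
// Sketch only: create the 2x2 texture with the initial data above and a
// shader resource view for it. bindFlag/cpuAccess values are assumptions.
DX::ThrowIfFailed(
    m_d3dDevice->CreateTexture2D(&textureDesc, &resdata, &m_texture)
    );
DX::ThrowIfFailed(
    m_d3dDevice->CreateShaderResourceView(m_texture.Get(), nullptr, &m_textureView)
    );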
The texture and sampler declarations in the pixel shader:
Texture2D imageTexture : register(t0);
SamplerState textureSampler : register(s0);
The texture sampling:
float4 texdata = imageTexture.Sample(textureSampler, float2(0.5f, 0.5f));
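For these declarations and the Sample call to see the texture, the resources also have to be bound on the CPU side. A minimal sketch of that binding, assuming the template's m_d3dContext and the illustrative members from the sketches above:
// Sketch only: bind the texture view to register t0 and the sampler to s0
// for the pixel shader stage.
m_d3dContext->PSSetShaderResources(0, 1, m_textureView.GetAddressOf());
m_d3dContext->PSSetSamplers(0, 1, m_samplerState.GetAddressOf());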
and finally the code that tests the sampled value:
float test = (130.0f / 255.0f);

if(texdata.x == test)
{
    texdata = 1.0f;
    texdata.r = 0.0f;
}
else
{
    texdata = 0.0f;
}

return texdata;
This test code works correctly on x86 but not on ARM. To make it work I made the following changes. I changed the byte scale factor from 255 to 256:
float test = (130.0f / 256.0f);
and added a call to my function:
texdata = ArmHackTextureSampler(texdata);
The code of this function:
float4 ArmHackTextureSampler(float4 texData)
{
    float4 testval = texData * 256.0f;

    if(testval.x >= 128.0f) testval.x--;
    if(testval.y >= 128.0f) testval.y--;
    if(testval.z >= 128.0f) testval.z--;
    if(testval.w >= 128.0f) testval.w--;

    testval /= 256.0f;

    return testval;
}
I have no idea why it works with the range 0...256 and not with 0...255. Even DirectX::PackedVector::XMStoreColor uses 255 as the scale factor. The next important thing is:
if(testval.w >= 128.0f) testval.w--;
Somehow, for values that are equal to or above 128.0/256.0 (in floating-point representation), 1 is added, so I need to subtract it. This "if" works perfectly even at 255.0/256.0. The subtraction of 1 is not needed for values smaller than 128, and again I don't know why. So the question is: why does the ARM version behave this way? Maybe I'm missing something essential about Windows 8 ARM devices?
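To make the numbers concrete, here is a small standalone C++ check (purely illustrative, not part of the project) that prints the reference UNORM-to-float conversion, value/255, which is also the scaling XMStoreColor uses, next to the 256-based value that the workaround ends up comparing against:
#include <cstdio>

int main()
{
    const unsigned value = 130;               // the byte stored in the texture
    const double unormRef  = value / 255.0;   // reference UNORM-to-float conversion (scale 255)
    const double hackValue = value / 256.0;   // what the workaround compares against (scale 256)
    std::printf("130/255 = %.9f\n", unormRef);  // prints 0.509803922
    std::printf("130/256 = %.9f\n", hackValue); // prints 0.507812500
    return 0;
}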
Here is the full code of the main function of the ARM pixel shader:
float4 main(PixelShaderInput input) : SV_TARGET
{
    float4 texdata = imageTexture.Sample(textureSampler, float2(0.5f, 0.5f));
    float test = (130.0f / 256.0f);
    texdata = ArmHackTextureSampler(texdata);
    if(texdata.x == test)
    {
        texdata = 1.0f;
        texdata.r = 0.0f;
    }
    else
    {
        texdata = 0.0f;
    }

    return texdata;
}
At the moment I don't know whether this "hack" will help the ARM version of my project, because I still need to integrate and test it.

PS. Sorry for my English.
