Configure GPU state.
void C3D_DepthMap (bool bIsZBuffer, float zScale, float zOffset)
    Specifies mapping of depth values from normalized device coordinates to window coordinates.

void C3D_CullFace (GPU_CULLMODE mode)
    Specifies whether front-facing or back-facing facets can be culled.

void C3D_StencilTest (bool enable, GPU_TESTFUNC function, int ref, int inputMask, int writeMask)
    Sets front and back function and reference value for stencil testing.

void C3D_StencilOp (GPU_STENCILOP sfail, GPU_STENCILOP dfail, GPU_STENCILOP pass)
    Sets front and back stencil test actions.

void C3D_BlendingColor (u32 color)
    Sets the blend color.

void C3D_EarlyDepthTest (bool enable, GPU_EARLYDEPTHFUNC function, u32 ref)

void C3D_DepthTest (bool enable, GPU_TESTFUNC function, GPU_WRITEMASK writemask)
    Configures depth testing options.

void C3D_AlphaTest (bool enable, GPU_TESTFUNC function, int ref)
    Configures Alpha testing.

void C3D_AlphaBlend (GPU_BLENDEQUATION colorEq, GPU_BLENDEQUATION alphaEq, GPU_BLENDFACTOR srcClr, GPU_BLENDFACTOR dstClr, GPU_BLENDFACTOR srcAlpha, GPU_BLENDFACTOR dstAlpha)
    Configures blend functions.

void C3D_ColorLogicOp (GPU_LOGICOP op)
    Configures logical pixel write operation during rendering.

void C3D_FragOpMode (GPU_FRAGOPMODE mode)
    Configures fragment writing mode.

void C3D_FragOpShadow (float scale, float bias)
|
◆ C3D_AlphaBlend()
void C3D_AlphaBlend (GPU_BLENDEQUATION colorEq, GPU_BLENDEQUATION alphaEq, GPU_BLENDFACTOR srcClr, GPU_BLENDFACTOR dstClr, GPU_BLENDFACTOR srcAlpha, GPU_BLENDFACTOR dstAlpha)

Configures blend functions.
- Note
- This causes LogicOp settings to be ignored; LogicOp and alpha blending cannot be used simultaneously.
- Parameters
-
[in] | colorEq | Specifies how source and destination colors are combined. The initial value is GPU_BLEND_ADD. |
[in] | alphaEq | Specifies how source and destination alphas are combined. The initial value is GPU_BLEND_ADD. |
[in] | srcClr | Specifies how the red, green, and blue source blending factors are computed. The initial value is GPU_SRC_ALPHA. |
[in] | dstClr | Specifies how the red, green, and blue destination blending factors are computed. The initial value is GPU_ONE_MINUS_SRC_ALPHA. |
[in] | srcAlpha | Specifies how the alpha source blending factors are computed. The initial value is GPU_SRC_ALPHA. |
[in] | dstAlpha | Specifies how the alpha destination blending factors are computed. The initial value is GPU_ONE_MINUS_SRC_ALPHA. |
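For example, the following is a minimal sketch of conventional back-to-front transparency using the initial equations and factors listed above; the render target and shader setup are assumed to already be in place.

    #include <citro3d.h>

    // Sketch: conventional "source over destination" compositing for
    // translucent geometry, using the initial equations/factors listed above.
    static void setupTransparency(void)
    {
        C3D_AlphaBlend(GPU_BLEND_ADD, GPU_BLEND_ADD,           // color and alpha equations
                       GPU_SRC_ALPHA, GPU_ONE_MINUS_SRC_ALPHA, // color factors
                       GPU_SRC_ALPHA, GPU_ONE_MINUS_SRC_ALPHA);// alpha factors
        // Calling C3D_AlphaBlend overrides any previous C3D_ColorLogicOp setting.
    }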
◆ C3D_AlphaTest()
void C3D_AlphaTest (bool enable, GPU_TESTFUNC function, int ref)

Configures Alpha testing.
- Parameters
-
[in] | enable | Enables or disables alpha test. The initial value is false. |
[in] | function | Specifies the alpha comparison function. The initial value is GPU_ALWAYS. |
[in] | ref | Specifies the reference value that incoming alpha values are compared to, from 0 to 0xFF. The initial value is 0. |
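As an illustration, this sketch discards fragments of a cutout texture whose alpha does not exceed a threshold of 0x80; the threshold is an arbitrary assumption, not a recommended value.

    #include <citro3d.h>

    // Sketch: keep only fragments whose alpha is greater than 0x80 (arbitrary
    // threshold), so nearly transparent texels of a cutout texture are
    // discarded before they can write color or depth.
    static void setupCutout(void)
    {
        C3D_AlphaTest(true, GPU_GREATER, 0x80);
    }

    // To restore the initial state:
    //   C3D_AlphaTest(false, GPU_ALWAYS, 0);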
◆ C3D_BlendingColor()
void C3D_BlendingColor (u32 color)

Sets the blend color.
- Parameters
-
[in] | color | Specifies the RGBA blend color. The initial value is 0. |
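A hedged sketch of one common use: mixing every fragment 50/50 with the framebuffer through the GPU_CONSTANT_COLOR/GPU_CONSTANT_ALPHA blend factors. All four channels are set to 0x80 so the exact RGBA byte packing of the u32 does not matter here; for other colors, check the packing expected by your version of the library.

    #include <citro3d.h>

    // Sketch: mix incoming fragments 50/50 with the existing framebuffer by
    // routing the constant blend color through the constant blend factors.
    static void setupConstantBlend(void)
    {
        C3D_BlendingColor(0x80808080); // all four channels at 0x80 (half intensity)
        C3D_AlphaBlend(GPU_BLEND_ADD, GPU_BLEND_ADD,
                       GPU_CONSTANT_COLOR, GPU_ONE_MINUS_CONSTANT_COLOR,
                       GPU_CONSTANT_ALPHA, GPU_ONE_MINUS_CONSTANT_ALPHA);
    }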
◆ C3D_ColorLogicOp()
void C3D_ColorLogicOp (GPU_LOGICOP op)

Configures logical pixel write operation during rendering.
- Note
- This resets existing alpha blending settings; LogicOp and alpha blending cannot be used simultaneously.
- Parameters
-
[in] | op | Operation to perform when writing incoming pixels to the existing framebuffer. |
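For instance, a sketch that XORs incoming pixels with the framebuffer, a classic way to draw an overlay that can be erased by drawing it a second time; remember that this replaces any C3D_AlphaBlend configuration.

    #include <citro3d.h>

    // Sketch: XOR incoming pixels with the framebuffer contents, e.g. for a
    // selection rectangle that disappears when drawn again.
    static void setupXorOverlay(void)
    {
        C3D_ColorLogicOp(GPU_LOGICOP_XOR); // replaces any alpha blending settings
    }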
◆ C3D_CullFace()
void C3D_CullFace (GPU_CULLMODE mode)

Specifies whether front-facing or back-facing facets can be culled.
- Parameters
-
[in] | mode | Specifies whether front-facing, back-facing, or no facets are candidates for culling. The initial value is GPU_CULL_BACK_CCW. |
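A small sketch of typical usage: cull back faces for closed meshes, then disable culling for double-sided geometry. The draw calls are placeholders.

    #include <citro3d.h>

    // Sketch: back-face culling for closed meshes, no culling for
    // double-sided geometry such as foliage cards.
    static void drawScene(void)
    {
        C3D_CullFace(GPU_CULL_BACK_CCW); // the initial value: discard back faces
        // ... draw closed meshes ...

        C3D_CullFace(GPU_CULL_NONE);     // render both sides
        // ... draw double-sided geometry ...
    }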
◆ C3D_DepthMap()
void C3D_DepthMap (bool bIsZBuffer, float zScale, float zOffset)

Specifies mapping of depth values from normalized device coordinates to window coordinates.
- Parameters
-
[in] | bIsZBuffer | Enables or disables depth range. The initial value is true. |
[in] | zScale | Specifies mapping of depth values from normalized device coordinates to window coordinates (nearVal - farVal). The initial value is -1.0f. |
[in] | zOffset | Sets the scale and units used to calculate depth values (nearVal + polygonOffset). The initial value is 0.0f. |
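As a rough sketch, the calls below first restore the default mapping, then apply a small zOffset while drawing decals that are coplanar with other geometry; the sign and magnitude of the bias are illustrative assumptions and should be tuned for your projection and depth-buffer precision.

    #include <citro3d.h>

    // Sketch: default depth mapping for regular geometry, then a small depth
    // bias for coplanar decals to reduce z-fighting. The bias value is an
    // illustrative assumption, not a recommended constant.
    static void drawWithDecals(void)
    {
        C3D_DepthMap(true, -1.0f, 0.0f);    // default mapping
        // ... draw base geometry ...

        C3D_DepthMap(true, -1.0f, -0.001f); // nudge decal depth slightly
        // ... draw decals ...

        C3D_DepthMap(true, -1.0f, 0.0f);    // restore the default
    }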
◆ C3D_DepthTest()
void C3D_DepthTest (bool enable, GPU_TESTFUNC function, GPU_WRITEMASK writemask)

Configures depth testing options.
- Note
- Setting the enable parameter to false will not also disable depth writes. It will instead behave as if the depth function were set to GPU_ALWAYS. To completely disable depth-related operations, the enable parameter must be false, and the writemask should be GPU_WRITE_COLOR.
- Parameters
-
[in] | enable | If true, do depth comparisons on the outgoing fragments and write to the depth buffer. The initial value is true. |
[in] | function | Specifies the depth comparison function. The initial value is GPU_GREATER. |
[in] | writemask | Configures buffer writemasks for the depth test stage. The initial value is GPU_WRITE_ALL. |
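For example, a sketch of a common two-pass arrangement: opaque geometry tests and writes depth, while translucent geometry still tests against the depth buffer but only writes color.

    #include <citro3d.h>

    // Sketch: opaque pass writes depth; translucent pass tests depth but
    // writes color only, so transparent surfaces do not occlude each other.
    static void setupDepthStates(void)
    {
        C3D_DepthTest(true, GPU_GREATER, GPU_WRITE_ALL);   // opaque pass
        // ... draw opaque geometry ...

        C3D_DepthTest(true, GPU_GREATER, GPU_WRITE_COLOR); // translucent pass
        // ... draw translucent geometry ...
    }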
◆ C3D_FragOpMode()
void C3D_FragOpMode (GPU_FRAGOPMODE mode)

Configures fragment writing mode.
- Note
- When using GPU_FRAGOPMODE_SHADOW, the secondary depth stencil texture must be disabled.
- Parameters
-
[in] | mode | Mode used for writing fragments. GPU_FRAGOPMODE_GL enables standard color/depth rendering; GPU_FRAGOPMODE_SHADOW writes color fragments as a D24X shadow depthmap (treated as RGBA8). |
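A hedged sketch of toggling between shadow and standard rendering; the scale and bias passed to C3D_FragOpShadow are placeholder values, and the shadow render-target setup required around this is not shown.

    #include <citro3d.h>

    // Sketch: switch to shadow fragment writing for a shadow-map pass, then
    // return to standard color/depth rendering. Scale/bias are placeholders.
    static void renderShadowPass(void)
    {
        C3D_FragOpMode(GPU_FRAGOPMODE_SHADOW);
        C3D_FragOpShadow(1.0f, 0.0f); // placeholder scale and bias
        // ... draw shadow casters into the shadow render target ...

        C3D_FragOpMode(GPU_FRAGOPMODE_GL);
    }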
◆ C3D_StencilOp()
void C3D_StencilOp (GPU_STENCILOP sfail, GPU_STENCILOP dfail, GPU_STENCILOP pass)

Sets front and back stencil test actions. (A combined sketch with C3D_StencilTest follows the C3D_StencilTest parameter list below.)
- Parameters
-
[in] | sfail | Specifies the action to take when the stencil test fails. The initial value is GPU_STENCIL_KEEP. |
[in] | dfail | Specifies the stencil action when the stencil test passes, but the depth test fails. The initial value is GPU_STENCIL_KEEP. |
[in] | pass | Specifies the stencil action when both the stencil test and the depth test pass, or when the stencil test passes and either there is no depth buffer or depth testing is not enabled. The initial value is GPU_STENCIL_KEEP. |
◆ C3D_StencilTest()
void C3D_StencilTest (bool enable, GPU_TESTFUNC function, int ref, int inputMask, int writeMask)

Sets front and back function and reference value for stencil testing.
- Parameters
-
[in] | enable | Enables or disables stencil test. The initial value is false. |
[in] | function | Specifies the test function. The initial value is GPU_ALWAYS. |
[in] | ref | Specifies the reference value for the stencil test. ref is clamped to the range [0, 2^n - 1], where n is the number of bitplanes in the stencil buffer. The initial value is 0. |
[in] | inputMask | Specifies a mask that is ANDed with both the reference value and the stored stencil value when the test is done. The initial value is all 1's. |
[in] | writeMask | Specifies a bit mask to enable and disable writing of individual bits in the stencil planes. Initially, the mask is all 0's. |
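To tie C3D_StencilTest and C3D_StencilOp together, here is a minimal sketch of a classic stencil mask: one pass tags the covered pixels with the value 1, the next pass draws only where that tag is present. The reference and mask values are illustrative.

    #include <citro3d.h>

    // Sketch: two-pass stencil masking.
    static void stencilMaskedDraw(void)
    {
        // Pass 1: always pass; write 1 into the stencil buffer where the mask draws.
        C3D_StencilTest(true, GPU_ALWAYS, 1, 0xFF, 0xFF);
        C3D_StencilOp(GPU_STENCIL_KEEP, GPU_STENCIL_KEEP, GPU_STENCIL_REPLACE);
        // ... draw the mask shape ...

        // Pass 2: draw only where the stencil value equals 1; leave the buffer unchanged.
        C3D_StencilTest(true, GPU_EQUAL, 1, 0xFF, 0x00);
        C3D_StencilOp(GPU_STENCIL_KEEP, GPU_STENCIL_KEEP, GPU_STENCIL_KEEP);
        // ... draw the masked scene ...

        C3D_StencilTest(false, GPU_ALWAYS, 0, 0xFF, 0x00); // stencil test off again
    }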