
Fix for incorrect bits per pixel set on texture reload #2138

Merged
merged 9 commits into axmolengine:dev from rh101:patch-1 on Sep 13, 2024

Conversation

@rh101 (Contributor) commented Sep 12, 2024

Describe your changes

The pixel format of the FPS texture was being set incorrectly, which ended up crashing Android applications after attempting to recover from a GL context loss.

The change made simply fixes the pixel format of that texture entry in the texture cache.

The true cause of the issue was the bits per pixel setting of the cached texture being overwritten during recovery from the GL context loss.

Issue ticket number and link

#2126

Checklist before requesting a review

For each PR

  • Add copyright if it is missing:
    - "Copyright (c) 2019-present Axmol Engine contributors (see AUTHORS.md)."

  • I have performed a self-review of my code.

    Optional:

    • I have checked the readme and added important info to this PR.
    • I have added/adapted some tests too.

For core/new feature PR

  • I have checked the readme and added important info to this PR.
  • I have added thorough tests.

@rh101 rh101 marked this pull request as draft September 12, 2024 09:26
@rh101 (Contributor, Author) commented Sep 12, 2024

Note that this may not be the right fix if the intention is to actually use RGBA4 as the pixel format, and the error may exist elsewhere. It only happens when the texture is being recreated after a GL context loss:

void VolatileTextureMgr::reloadAllTextures()
{
...
    for (auto&& texture : _textures)
    {
        VolatileTexture* vt = texture;

        switch (vt->_cashedImageType)
        {
...
        case VolatileTexture::kImage:
        {
            vt->_texture->initWithImage(vt->_uiImage, vt->_pixelFormat);  // this call
        }
        break;

Which ends up here:

bool Texture2D::updateWithImage(Image* image, backend::PixelFormat format, int index)
{
    if (image == nullptr)
    {
        __AXLOGWITHFUNCTION("axmol: Texture2D. Can't create Texture. UIImage is nil");
        return false;
    }

    if (this->_filePath.empty())
        this->_filePath = image->getFilePath();

    int imageWidth  = image->getWidth();
    int imageHeight = image->getHeight();

    Configuration* conf = Configuration::getInstance();

    int maxTextureSize = conf->getMaxTextureSize();
    if (imageWidth > maxTextureSize || imageHeight > maxTextureSize)
    {
        AXLOGW("axmol: WARNING: Image ({} x {}) is bigger than the supported {} x {}", imageWidth, imageHeight,
              maxTextureSize, maxTextureSize);
        return false;
    }

    unsigned char* tempData               = image->getData();
    // Vec2 imageSize                        = Vec2((float)imageWidth, (float)imageHeight);
    backend::PixelFormat renderFormat     = (PixelFormat::NONE == format) ? image->getPixelFormat() : format;
    backend::PixelFormat imagePixelFormat = image->getPixelFormat();
    size_t tempDataLen                    = image->getDataLen();

#ifdef AX_USE_METAL
    //! override renderFormat, since some render format is not supported by metal
    switch (renderFormat)
    {
#    if (AX_TARGET_PLATFORM != AX_PLATFORM_IOS || TARGET_OS_SIMULATOR)
    // packed 16 bits pixels only available on iOS
    case PixelFormat::RGB565:
    case PixelFormat::RGB5A1:
    case PixelFormat::RGBA4:
#    endif
    case PixelFormat::R8:
    case PixelFormat::RG8:
    case PixelFormat::RGB8:
        // Note: conversion to RGBA8 will happends
        renderFormat = PixelFormat::RGBA8;
        break;
    default:
        break;
    }
#elif !AX_GLES_PROFILE
    // Non-GLES doesn't support follow render formats, needs convert PixelFormat::RGBA8
    // Note: axmol-1.1 deprecated A8, L8, LA8 as renderFormat, preferred R8, RG8
    switch (renderFormat)
    {
    case PixelFormat::R8:
    case PixelFormat::RG8:
        // Note: conversion to RGBA8 will happends
        renderFormat = PixelFormat::RGBA8;
    }
#endif

    if (image->getNumberOfMipmaps() > 1)
    {
        if (renderFormat != image->getPixelFormat())
        {
            AXLOGW("WARNING: This image has more than 1 mipmaps and we will not convert the data format");
        }

        // pixel format of data is not converted, renderFormat can be different from pixelFormat
        // it will be done later
        updateWithMipmaps(image->getMipmaps(), image->getNumberOfMipmaps(), image->getPixelFormat(), renderFormat, imageHeight, imageWidth, image->hasPremultipliedAlpha(), index);
    }
    else if (image->isCompressed())
    {  // !Only hardware support texture will be compression PixelFormat, otherwise, will convert to RGBA8 duraing image
       // load
        renderFormat = imagePixelFormat;
        updateWithData(tempData, tempDataLen, image->getPixelFormat(), image->getPixelFormat(), imageWidth, imageHeight, image->hasPremultipliedAlpha(), index);
    }
    else
    {
        // after conversion, renderFormat == pixelFormat of data
        updateWithData(tempData, tempDataLen, imagePixelFormat, renderFormat, imageWidth, imageHeight,
                       image->hasPremultipliedAlpha(), index);
    }

    return true;
}

The problem may be in updateWithMipmaps, which is called from updateWithData.

I'm looking into this some more to try to figure out why it's happening, since forcing RGBA4 for the FPS texture makes sense.

@rh101 (Contributor, Author) commented Sep 12, 2024

The true problem seems to be _bitsPerPixel, which is 32 when it should be 16:

[Screenshot: debugger showing _bitsPerPixel equal to 32]

When the FPS texture is first created using pixel format RGBA4, everything is fine: _bitsPerPixel is 16, so the calculation for bytes per row is correct. On recovery from the GL context loss, _bitsPerPixel is never set correctly, which results in an incorrect GL_UNPACK_ALIGNMENT value and causes the crash.
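For context, the kind of calculation that value feeds into looks roughly like this (a simplified sketch, not the exact axmol code; variable names are illustrative):

// Simplified sketch: how bytes per row and GL_UNPACK_ALIGNMENT are typically
// derived from the cached bits-per-pixel value before uploading texture data.
unsigned int bytesPerRow = width * bitsPerPixel / 8;  // RGBA4 data: width * 2 bytes

if (bytesPerRow % 8 == 0)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
else if (bytesPerRow % 4 == 0)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
else if (bytesPerRow % 2 == 0)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
else
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

// With _bitsPerPixel wrongly set to 32, bytesPerRow is computed as width * 4,
// so GL may assume row padding that the 16-bit RGBA4 data does not have and
// read past the end of the buffer.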

@rh101 (Contributor, Author) commented Sep 12, 2024

The problem has been traced to this method:

void Texture2DGL::initWithZeros()
{
    // !!!Only used for depth stencil render buffer
    // For example, a texture used as depth buffer will not invoke updateData(), see cpp-tests 'Effect Basic/Effects
    // Advanced'.
    // FIXME: Don't call at Texture2DGL::updateTextureDescriptor, when the texture is compressed, initWithZeros will
    // cause GL Error: 0x501 We call at here once to ensure depth buffer works well. Ensure the final data size at least
    // 4 byte

    _width        = (std::max)(_width, (uint32_t)1);
    _height       = (std::max)(_height, (uint32_t)1);
    _bitsPerPixel = (std::max)(_bitsPerPixel, (uint8_t)(8 * 4));

    auto size     = _width * _height * _bitsPerPixel / 8;
    uint8_t* data = (uint8_t*)malloc(size);
    memset(data, 0, size);
    updateData(data, _width, _height, 0);
    free(data);
}

_bitsPerPixel is being overwritten by the std::max in that statement, so even though it may be 16 going in, it ends up changed to 32, which causes the problem.
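As a minimal standalone illustration (not axmol code) of why that statement overwrites a valid value:

#include <algorithm>
#include <cstdint>
#include <cstdio>

int main()
{
    uint8_t bitsPerPixel = 16;  // cached value for the RGBA4 FPS texture

    // The statement from initWithZeros(): intended as a floor for
    // uninitialized textures, but it also raises an already valid 16 to 32.
    bitsPerPixel = (std::max)(bitsPerPixel, (uint8_t)(8 * 4));

    std::printf("%d\n", (int)bitsPerPixel);  // prints 32, not 16
    return 0;
}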

The Texture2DGL::initWithZeros method is called from here when it is recovering from a GL context loss:

Texture2DGL::Texture2DGL(const TextureDescriptor& descriptor)
{
    updateTextureDescriptor(descriptor);

#if AX_ENABLE_CACHE_TEXTURE_DATA
    // Listen this event to restored texture id after coming to foreground on Android.
    _rendererRecreatedListener = EventListenerCustom::create(EVENT_RENDERER_RECREATED, [this](EventCustom*) {
        _textureInfo.onRendererRecreated(GL_TEXTURE_2D);
        this->initWithZeros();
    });
    Director::getInstance()->getEventDispatcher()->addEventListenerWithFixedPriority(_rendererRecreatedListener, -1);
#endif
}

The Cocos2d-x implementation was the following; it never changed the value of _bitsPerElement, which is why it never caused the problem there:

void Texture2DGL::initWithZeros()
{
    auto size = _width * _height * _bitsPerElement / 8;
    uint8_t* data = (uint8_t*)malloc(size);
    memset(data, 0, size);
    updateData(data, _width, _height, 0);
    free(data);
}

@halx99 How should this be solved? Should we just use std::min for the statement?

_bitsPerPixel = (std::min)(_bitsPerPixel, (uint8_t)(8 * 4));

or, alternatively, avoid setting it if it already has a valid value (greater than 0 and less than or equal to 32):

constexpr auto maxBitsPerPixel = (uint8_t)(8 * 4);
if (_bitsPerPixel > maxBitsPerPixel || _bitsPerPixel == 0)
{
    _bitsPerPixel = maxBitsPerPixel;
}

@rh101 (Contributor, Author) commented Sep 12, 2024

I've used this as the fix now, unless there are better suggestions:

constexpr auto maxBitsPerPixel = (uint8_t)(8 * 4);
if (_bitsPerPixel > maxBitsPerPixel || _bitsPerPixel == 0)
{
    _bitsPerPixel = maxBitsPerPixel;
}

This will preserve a valid _bitsPerPixel value. The app no longer crashes on recovery from the GL context loss.

@rh101 rh101 marked this pull request as ready for review September 12, 2024 10:37
@smilediver (Contributor) commented:

This doesn't seem right. Why is initWithZeros() modifying _width, _height, or _bitsPerPixel at all? Also, floating-point textures can have more than 32 bits per pixel.

@rh101 rh101 changed the title Set pixel format of cached FPS texture to correct value Fix for incorrect bits per pixel set on texture reload Sep 12, 2024
@rh101 (Contributor, Author) commented Sep 12, 2024

This doesn't seem right. Why is initWithZeros() modifying _width, _height, or _bitsPerPixel at all? Also, floating-point textures can have more than 32 bits per pixel.

I'm not sure why the code was changed from the Cocos2d-x implementation, which never adjusted those values.

If floating-point textures can have more than 32 bits, then I'll change the conditional to simply check for 0.

@halx99 (Collaborator) commented Sep 12, 2024

A better solution is to clamp width, height, and bitsPerPixel in TextureBackend::updateTextureDescriptor, and not modify them in initWithZeros.

@rh101 (Contributor, Author) commented Sep 12, 2024

A better solution is to clamp width, height, and bitsPerPixel in TextureBackend::updateTextureDescriptor, and not modify them in initWithZeros.

That can be done, but I'm concerned about this "FIXME" comment in the code:

void Texture2DGL::initWithZeros()
{
    // !!!Only used for depth stencil render buffer
    // For example, a texture used as depth buffer will not invoke updateData(), see cpp-tests 'Effect Basic/Effects
    // Advanced'.
    // FIXME: Don't call at Texture2DGL::updateTextureDescriptor, when the texture is compressed, initWithZeros will
    // cause GL Error: 0x501 We call at here once to ensure depth buffer works well. Ensure the final data size at least
    // 4 byte

    _width        = (std::max)(_width, (uint32_t)1);
    _height       = (std::max)(_height, (uint32_t)1);

Is this what you were thinking:

void TextureBackend::updateTextureDescriptor(const ax::backend::TextureDescriptor& descriptor, int /*index*/)
{
    _bitsPerPixel  = PixelFormatUtils::getBitsPerPixel(descriptor.textureFormat);
    _textureType   = descriptor.textureType;
    _textureFormat = descriptor.textureFormat;
    _textureUsage  = descriptor.textureUsage;
    _width        = (std::max)(descriptor.width, (uint32_t)1);
    _height       = (std::max)(descriptor.height, (uint32_t)1);

    if (_bitsPerPixel == 0)
    {
        _bitsPerPixel = (uint8_t)(8 * 4);
    }
}

and

void Texture2DGL::initWithZeros()
{
    // !!!Only used for depth stencil render buffer
    // For example, a texture used as depth buffer will not invoke updateData(), see cpp-tests 'Effect Basic/Effects
    // Advanced'.
    // FIXME: Don't call at Texture2DGL::updateTextureDescriptor, when the texture is compressed, initWithZeros will
    // cause GL Error: 0x501 We call at here once to ensure depth buffer works well. Ensure the final data size at least
    // 4 byte
    auto size     = _width * _height * _bitsPerPixel / 8;
    uint8_t* data = (uint8_t*)malloc(size);
    memset(data, 0, size);
    updateData(data, _width, _height, 0);
    free(data);
}

@rh101 rh101 marked this pull request as draft September 12, 2024 15:13
@halx99 (Collaborator) commented Sep 12, 2024

The comment should be: initWithZeros is only used for RenderTarget attachments. I'm not sure whether we need to invoke it when the Android GLES context resumes after a loss.

@halx99 (Collaborator) commented Sep 12, 2024

(rh101's previous comment quoted in full: the clamping suggestion, the "FIXME" concern, and the proposed TextureBackend::updateTextureDescriptor and Texture2DGL::initWithZeros changes)

correct

@halx99 halx99 added this to the 2.2.0 milestone Sep 12, 2024
@halx99 halx99 added the bug Something isn't working label Sep 12, 2024
@smilediver (Contributor) commented:

Why is this clamping necessary? Shouldn't these be asserts? If you have a texture with 0 width, height, or bits per pixel, then I'm certain it's a bug.
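For comparison, a rough sketch of what an assert-based variant of the updateTextureDescriptor change could look like (purely illustrative, based on the snippet posted above; whether a zero bits-per-pixel can legitimately occur for some formats would need to be checked before asserting):

#include <cassert>

void TextureBackend::updateTextureDescriptor(const ax::backend::TextureDescriptor& descriptor, int /*index*/)
{
    _bitsPerPixel  = PixelFormatUtils::getBitsPerPixel(descriptor.textureFormat);
    _textureType   = descriptor.textureType;
    _textureFormat = descriptor.textureFormat;
    _textureUsage  = descriptor.textureUsage;
    _width         = descriptor.width;
    _height        = descriptor.height;

    // Treat zero dimensions or an unknown pixel size as a programming error
    // instead of silently clamping to fallback values.
    assert(_width > 0 && _height > 0 && "texture descriptor has zero width or height");
    assert(_bitsPerPixel > 0 && "bits per pixel is 0 for this texture format");
}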

Move width, height and bitsperpixel clamping
@rh101 (Contributor, Author) commented Sep 12, 2024

The comment should be: initWithZeros is only used for RenderTarget attachments. I'm not sure whether we need to invoke it when the Android GLES context resumes after a loss.

I'm not sure if it needs to be called on the context loss either.

By the way, would you mind if I make the following change too:

AX_DEPRECATED_ATTRIBUTE VolatileTexture* findVolotileTexture(Texture2D* tt) { return getOrAddVolatileTexture(tt); }
VolatileTexture* getOrAddVolatileTexture(Texture2D* tt);

findVolotileTexture should be "findVolatileTexture", but that method name does not represent what it actually does, which is to add a new entry if it does not find one. I suggest we deprecate the old method name (or delete it) and use getOrAddVolatileTexture. What do you think about that?

@halx99 (Collaborator) commented Sep 13, 2024

The comment should be: initWithZeros is only used for RenderTarget attachments. I'm not sure whether we need to invoke it when the Android GLES context resumes after a loss.

I'm not sure if it needs to be called on the context loss either.

By the way, would you mind if I make the following change too:

AX_DEPRECATED_ATTRIBUTE VolatileTexture* findVolotileTexture(Texture2D* tt) { return getOrAddVolatileTexture(tt); }
VolatileTexture* getOrAddVolatileTexture(Texture2D* tt);

findVolotileTexture should be "findVolatileTexture", but that method name does not represent what it actually does, which is to add a new entry if it does not find one. I suggest we deprecate the old method name (or delete it) and use getOrAddVolatileTexture. What do you think about that?

Sure, but please use AX_DEPRECATED(2.2), or just rename it without a deprecation mark since it's an internal API.
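For reference, a sketch of what that could look like (assuming AX_DEPRECATED takes the target version, as noted above):

// Keep the old misspelled name as a thin deprecated wrapper; new code should
// call getOrAddVolatileTexture directly.
AX_DEPRECATED(2.2) VolatileTexture* findVolotileTexture(Texture2D* tt) { return getOrAddVolatileTexture(tt); }
VolatileTexture* getOrAddVolatileTexture(Texture2D* tt);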

@rh101 rh101 marked this pull request as ready for review September 13, 2024 05:25
@halx99 halx99 merged commit 4c66494 into axmolengine:dev Sep 13, 2024
28 of 29 checks passed
@rh101 rh101 deleted the patch-1 branch September 13, 2024 13:19
xfbird pushed a commit to xfbird/axmol that referenced this pull request Sep 18, 2024

* Set pixel format of cached FPS texture to correct value

* Do not change _bitsPerPixel if it is a valid value

* Only set _bitsPerPixel if it is the default value of 0

* Assert that size is not equal to 0
Move width, height and bitsperpixel clamping

* Deprecate findVolotileTexture and replace with getOrAddVolatileTexture.

* Add missing static specifier

* Replace calls to findVolotileTexture with getOrAddVolatileTexture