I am trying to make a shader that stalls my program, like so:

#version 450

layout (local_size_x = 16, local_size_y = 16) in;

void main()
{
    while(true) {}
}

I am trying to call the pipeline associated with the shader like this:

static void GpuCompute(
    EffectFramework& frame_work,
    const std::string& shader_path)
{
    auto& pipeline = frame_work.GetPipeline(shader_path);
    auto h_interface = HardwareInterface::h_interface;
    auto& device = h_interface->GetDevice();
    auto& cmd_pool = h_interface->GetCommandPool();
    auto& cmd_buffer = h_interface->GetCmdBufferTmp();
    auto& queue = h_interface->GetQueue();

    vk::CommandBufferAllocateInfo alloc_info(
        cmd_pool, vk::CommandBufferLevel::ePrimary, 1);

    auto [result, buffers] = device.allocateCommandBuffersUnique(alloc_info);
    if(result != vk::Result::eSuccess)
        Log::RecordLogError("Failed to create command buffers");

    cmd_buffer = std::move(buffers[0]);

    vk::CommandBufferBeginInfo begin_info(
        vk::CommandBufferUsageFlagBits::eSimultaneousUse, nullptr);

    vk::FenceCreateInfo fence_create_info = {};
    fence_create_info.flags = {};
    auto[result_f, fence] = device.createFenceUnique(fence_create_info);

    if(result_f != vk::Result::eSuccess)
        Log::RecordLogError("Failed to create compute fence");

    result = cmd_buffer->begin(&begin_info);
    if(result != vk::Result::eSuccess)
        Log::RecordLogError("Failed to begin recording command buffer!");
    _SetName(device, *cmd_buffer, "compute_cmd_buffer");

    cmd_buffer->bindPipeline(vk::PipelineBindPoint::eCompute, pipeline.GetPipeline());
    cmd_buffer->dispatch(1920 / 16, 1440 / 16, 1);

    cmd_buffer->end();

    vk::SubmitInfo submit_info = {};
    submit_info.commandBufferCount = 1;
    submit_info.pCommandBuffers = &*cmd_buffer;
    queue.submit(1, &submit_info, *fence);

    device.waitForFences(1,
        &*fence,
        VK_TRUE,
        std::numeric_limits<uint64_t>::max());
}

However, when I run my program it doesn't stall. I used RenderDoc to make sure I was actually calling the shader:

[RenderDoc screenshot: the dispatch appears in the capture]

It seems that the dispatch call is using the correct shader.

[RenderDoc screenshot: the pipeline state shows the compute shader]

So why does my code run? It should get stuck computing that loop until the heat death of the universe.

The way I know it's not stalled is that I am also rendering graphics after calling the compute shader, on the same queue and on the same thread. To my understanding, commands submitted to the same queue execute sequentially, so this shader should stall the entire pipeline, but I can still interact with my program just fine.

2 Comments

  • I wonder if the Vulkan driver is optimizing the entire while loop away because it's not doing any work. Could you try adding something inside the loop? Maybe writing to a uniform, or even just updating a variable? – Jun 18, 2020
  • In C++ the compiler may assume a loop eventually does I/O or changes observable state, so an empty infinite loop can be removed; maybe Vulkan uses similar rules. – Jun 18, 2020

1 Answer


"make a shader that stalls my program"

Your operating system and graphics drivers don't like it when the device responsible for the primary user interface stops responding. Neither do the users of said operating system or said graphics drivers. Therefore, they don't allow you to do that.

Any shader operation that runs for "too long" (however the driver defines that) will be unceremoniously terminated by the GPU's timeout-and-reset mechanism (Timeout Detection and Recovery, or TDR, on Windows). Your application may simply appear to continue as if the dispatch executed to completion, you may lose the device (VK_ERROR_DEVICE_LOST), or a hard GPU reset may occur.

Alternatively, your shader compiler may simply have detected that your shader has no visible side effects (i.e., it doesn't write anything), so it optimized away all of its code. Dead-code elimination tends to work backwards from the outputs of the shader. Since this is a compute shader, its outputs would be writes to SSBOs, imageStore calls, or atomic updates. Your code does none of those, so none of its code contributes to any output, and therefore the shader does nothing.
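
If dead-code elimination is what's happening, giving the loop an observable side effect should prevent it. Below is a minimal sketch (not your original shader) that atomically increments a value in a hypothetical SSBO at set 0, binding 0; the host code would also have to create that buffer and bind it through the pipeline layout and a descriptor set:

#version 450

layout (local_size_x = 16, local_size_y = 16) in;

// Hypothetical storage buffer; the set/binding numbers are placeholders
// and must match whatever the host-side descriptor set actually uses.
layout (std430, set = 0, binding = 0) buffer Counter
{
    uint value;
};

void main()
{
    // The atomic write is an observable side effect, so the compiler
    // can no longer discard the loop body as dead code.
    while (true)
    {
        atomicAdd(value, 1u);
    }
}

Even then, expect the behavior described above: the driver will kill the dispatch once its timeout expires rather than letting it spin forever.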

The point being, you're going to have to be a lot more clever than that if you want to crash your GPU.

1 Comment

Crashing the GPU is indeed the final goal
