17

An undetectable ML-based aimbot that visually recognizes enemies and your crosshairs in images copied from the GPU head, and produces emulated mouse movements at the OS level to aim for you.

Undetectable because it uses the same API to retrieve images as gameplay streaming software, whereas almost all existing aimbots must somehow directly access the memory of the running game.
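
Exactly which capture API it ends up being isn't settled yet. As a placeholder, here is roughly what the Windows-generic route looks like (DXGI Desktop Duplication, one of the paths streaming software uses); a vendor API like NVIDIA's would slot into the same place. Error handling is stripped:

```cpp
// Rough sketch: grab one desktop frame via DXGI Desktop Duplication.
#include <d3d11.h>
#include <dxgi1_2.h>

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);

    // Walk from the D3D11 device to the first monitor and duplicate its output.
    IDXGIDevice* dxgiDevice = nullptr;
    device->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&dxgiDevice));
    IDXGIAdapter* adapter = nullptr;
    dxgiDevice->GetAdapter(&adapter);
    IDXGIOutput* output = nullptr;
    adapter->EnumOutputs(0, &output);
    IDXGIOutput1* output1 = nullptr;
    output->QueryInterface(__uuidof(IDXGIOutput1), reinterpret_cast<void**>(&output1));
    IDXGIOutputDuplication* duplication = nullptr;
    output1->DuplicateOutput(device, &duplication);

    // Each acquired frame is a GPU texture that can be handed to the recognizer.
    DXGI_OUTDUPL_FRAME_INFO info;
    IDXGIResource* resource = nullptr;
    if (SUCCEEDED(duplication->AcquireNextFrame(16, &info, &resource))) {
        ID3D11Texture2D* frame = nullptr;
        resource->QueryInterface(__uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&frame));
        // ... copy to a staging texture / feed the model here ...
        frame->Release();
        resource->Release();
        duplication->ReleaseFrame();
    }
    return 0;
}
```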

Comments
  • 3
    I christen it ML.GBot-9001
  • 2
    @legionfrontier and lo, it was leet.
  • 2
    Let me know if this becomes real.
  • 2
    @ewpratten certainly! I'm currently manually tagging a huge dataset of captured images from a game I play, but it's tough work, and it has to be done before I can even start training an algorithm on it. It needs to happen before the algorithm is chosen, because you don't know what you're dealing with until you've really been intimate (heheh) with the data.
  • 1
    What programming language are you using?
  • 1
    @wholl0p well, the frames will have to be retrieved through the NVIDIA C API (or the equivalent AMD API), which points to slightly lower-level languages.

    That's at runtime though. For convenience I think I want the runtime and learn-time systems to be written in the same language. Probably C++, Kotlin or Python.

    I'm leaning toward C++/Kotlin because we'll need high performance: processing 60 frames per second, or at least 16 frames per second even if we turn down the aim frequency (rough loop sketch at the end of this comment).

    Maybe Rust, but again I have decided almost nothing for certain about the learning system, because I still need to add manual tags to my current dataset.
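
    Whatever the language ends up being, the real constraint is the per-frame budget: roughly 16.7 ms at 60 Hz, 62.5 ms at 16 Hz, and capture + inference + the mouse update all have to fit inside it. Very rough shape of that loop (plain C++ with std::chrono, placeholders instead of the real pipeline):

    ```cpp
    #include <chrono>
    #include <thread>

    int main() {
        using clock = std::chrono::steady_clock;
        constexpr int target_hz = 60;  // or 16 if the aim frequency is turned down
        constexpr auto budget = std::chrono::microseconds(1'000'000 / target_hz);

        while (true) {
            const auto start = clock::now();

            // capture_frame();             -- placeholder: vendor capture API
            // detections = infer(frame);   -- placeholder: the (undecided) model
            // move_mouse(detections);      -- placeholder: OS-level emulation

            const auto elapsed = clock::now() - start;
            if (elapsed < budget)
                std::this_thread::sleep_for(budget - elapsed);  // hold the target rate
        }
    }
    ```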
  • 2
    But cheating at video games though. :-/
    Besides, your read from the graphics head may be pretty safe from detection, but the mouse emulation is still easy to scan for.
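
    For example, assuming the emulation goes through SendInput on Windows, the injected events carry a flag that even an ordinary low-level mouse hook can see, never mind a real anti-cheat:

    ```cpp
    // Injected (SendInput) mouse events are marked LLMHF_INJECTED.
    #include <windows.h>

    LRESULT CALLBACK MouseHook(int code, WPARAM wParam, LPARAM lParam) {
        if (code == HC_ACTION) {
            const MSLLHOOKSTRUCT* info = reinterpret_cast<MSLLHOOKSTRUCT*>(lParam);
            if (info->flags & LLMHF_INJECTED) {
                // This event did not come from a physical mouse.
            }
        }
        return CallNextHookEx(nullptr, code, wParam, lParam);
    }

    int main() {
        SetWindowsHookExW(WH_MOUSE_LL, MouseHook, GetModuleHandleW(nullptr), 0);
        MSG msg;
        while (GetMessageW(&msg, nullptr, 0, 0)) {  // the hook needs a message loop
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
        return 0;
    }
    ```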
  • 1
    @uberblah Are you familiar with ML? I'm asking because the performance gain from lower-level languages should be negligible, since you're calculating on the GPU anyway.
    If you're new to the field, I strongly recommend using one of the established frameworks (TensorFlow, Torch, Theano...), which do most of the GPU optimization for you already (and use low-level implementations of the relevant ML functions).

    Actually, you should get better performance and maintainability with one of these frameworks compared to implementing the whole thing in C/Rust. You might need some kind of wrapper around a low-level method that extracts the frames from VRAM, though (I'm not too sure about that, since I haven't done much GPU computing besides ML).
  • 1
    @theCalcaholic Either way I wouldn't write it from scratch. I'd probably use TensorFlow's C++ API, or one of the Java APIs if I go with Kotlin, or any framework in Python.

    I need to be able to respond 60 times per second to something that happens on the GPU, and I don't want to just assume Python will be enough. I'll try it, of course.
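
    If it does end up being the TensorFlow C++ API, the runtime side is roughly "load a SavedModel once, run it every frame". The model path, tensor names and input shape below are placeholders, since nothing about the model is decided yet:

    ```cpp
    #include <vector>
    #include "tensorflow/cc/saved_model/loader.h"
    #include "tensorflow/cc/saved_model/tag_constants.h"
    #include "tensorflow/core/framework/tensor.h"

    int main() {
        tensorflow::SavedModelBundle bundle;
        tensorflow::Status status = tensorflow::LoadSavedModel(
            tensorflow::SessionOptions(), tensorflow::RunOptions(),
            "aimbot_model/",                         // placeholder path
            {tensorflow::kSavedModelTagServe}, &bundle);
        if (!status.ok()) return 1;

        // One captured frame, already resized/normalized to whatever the net expects.
        tensorflow::Tensor frame(tensorflow::DT_FLOAT,
                                 tensorflow::TensorShape({1, 270, 480, 3}));

        std::vector<tensorflow::Tensor> outputs;
        status = bundle.session->Run({{"input:0", frame}},  // placeholder tensor names
                                     {"detections:0"}, {}, &outputs);
        if (!status.ok()) return 1;
        // outputs[0] would hold the positions to turn into a mouse delta.
        return 0;
    }
    ```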
  • 0
    @kwilliams good point. I might have to bounce the mouse-move commands off an attached Arduino that can emulate a PS/2 mouse (rough sketch of the microcontroller side at the end of this comment).

    That might slow it down too much to be useful...

    Anyway, cheating at games isn't the goal, just a POC that we can no longer count on games not being cheated at.
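
    The easy version of the hardware route would be an Arduino with native USB (Leonardo/Micro) acting as a USB HID mouse via the Mouse library rather than true PS/2, with the PC streaming deltas over serial. Rough sketch of the microcontroller side:

    ```cpp
    // The PC sends signed byte pairs (dx, dy) over serial; the board replays
    // them as what the OS sees as hardware mouse input.
    #include <Mouse.h>

    void setup() {
        Serial.begin(115200);
        Mouse.begin();
    }

    void loop() {
        if (Serial.available() >= 2) {
            int8_t dx = (int8_t)Serial.read();
            int8_t dy = (int8_t)Serial.read();
            Mouse.move(dx, dy, 0);  // relative move, no scroll
        }
    }
    ```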
  • 1
    @uberblah Oh, I wasn't aware there's a C++ API for TensorFlow... To my knowledge it's Python-only.
  • 0
    @theCalcaholic My understanding (which may be completely wrong, seeing as I've never used the C++ API) is that TensorFlow itself is written in C++, and that the Python framework calls into that. The C++ library, in turn, makes calls to NVIDIA's cuDNN library for GPU interactions.