Just added a super-cool Mandelbrot explorer as a demo app for Aparapi (so it runs on the GPU, not the CPU). I can't stop playing with it and it's only a demo :)
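For the curious, the heart of a Mandelbrot explorer is the escape-time loop each pixel's work-item runs. In Aparapi proper this body would live inside `Kernel.run()` with the pixel index taken from `getGlobalId()`; here's a minimal plain-Java sketch of that per-pixel math (not the demo's actual code):

```java
// Sketch of the escape-time loop each GPU work-item evaluates per pixel.
// In Aparapi the loop body would sit inside Kernel.run(), with the pixel
// index from getGlobalId(); shown as plain Java for illustration.
public class MandelbrotSketch {
    static int escapeIterations(float cr, float ci, int maxIter) {
        float zr = 0f, zi = 0f;
        int i = 0;
        // iterate z = z^2 + c until |z| >= 2 or the iteration budget runs out
        while (i < maxIter && zr * zr + zi * zi < 4f) {
            float t = zr * zr - zi * zi + cr;
            zi = 2f * zr * zi + ci;
            zr = t;
            i++;
        }
        return i; // the iteration count drives the pixel's color
    }

    public static void main(String[] args) {
        // a point inside the set never escapes; a far-away point escapes fast
        System.out.println(escapeIterations(0f, 0f, 100)); // 100
        System.out.println(escapeIterations(2f, 2f, 100)); // 1
    }
}
```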


In case anyone wants to check it out, this is my dev computer. I work on massively parallel algorithms a lot of the time, so I need some really heavy-duty GPUs for GPGPU (those are 4 water-cooled Radeon Vega Frontier Edition GPUs) and a 32-core, 3.7 GHz Ryzen Threadripper CPU, plus 64 GB of the fastest RAM around. This baby is a **beast**.

inb4 "why the Radeon GPUs and not NVIDIA or an Intel chip?"... simple: they are better for many (though not all) applications. The CPU and motherboard support the full number of channels to maximize throughput into and out of the GPUs. While the Radeons have fewer cores than a Tesla, the cores don't tend to be the bottleneck; the I/O is. So I sacrificed cores to max out the I/O channels, where the bottleneck tends to be. AMD's Ryzen Threadripper CPUs were the only ones doing that.


Spent a good part of the day yesterday writing up some neat Docker images to run GPU-enabled OpenCL out of. Wasn't too hard to figure out how to give the Docker image the correct access to the GPU to enable GPU acceleration. In fact, the trickier part was figuring out how to write the .gitlab-ci.yml file and the respective Dockerfile to be parameterized to minimize work.
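The Dockerfiles themselves aren't shown here, but granting a container OpenCL access to the GPU typically comes down to passing the right devices through at run time. A hedged sketch under that assumption (the image names come from the tagging scheme below; `clinfo` is assumed to be installed in the image):

```shell
# AMD: pass the DRM/ROCm device nodes through to the container
docker run --rm \
  --device=/dev/kfd --device=/dev/dri \
  aparapi/aparapi-amdgpu:latest clinfo

# NVIDIA: use the NVIDIA container runtime instead
docker run --rm --gpus all aparapi/aparapi-nvidia:latest clinfo
```

If `clinfo` lists your GPU as an OpenCL platform/device from inside the container, the passthrough is working.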

It's a cool little trick I used: it basically looks at the branch or tag name to figure out how to tag the Docker images. If the branch is develop, one image is tagged as "aparapi/aparapi-nvidia:git" and another as "aparapi/aparapi-amdgpu:git". Similarly, if the branch is master, then the tag is "latest" instead of "git". However, if it's a git tag, then the tag name is used instead. So when using Aparapi version 2.0.0 with amdgpu, plus the revision of the Dockerfile, it would look like "aparapi/aparapi-amdgpu:2.0.0-1". This means minimal work for me: when I want to use a new version, I just change the Aparapi version, push it to a new tag, and it does all the work to compile it.
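The branch-or-tag trick can be expressed with GitLab's predefined CI variables. A hedged sketch, not the actual .gitlab-ci.yml (the job name and `APARAPI_VERSION` build-arg are hypothetical):

```yaml
# CI_COMMIT_TAG is set only on tag pipelines; CI_COMMIT_REF_NAME
# is the branch name on branch pipelines.
build-amdgpu:
  image: docker:stable
  services:
    - docker:dind
  script:
    - |
      if [ -n "$CI_COMMIT_TAG" ]; then
        TAG="$CI_COMMIT_TAG"          # e.g. 2.0.0-1
      elif [ "$CI_COMMIT_REF_NAME" = "master" ]; then
        TAG="latest"
      else
        TAG="git"                     # develop (and other branches)
      fi
    - docker build --build-arg APARAPI_VERSION="$APARAPI_VERSION" -t "aparapi/aparapi-amdgpu:$TAG" .
    - docker push "aparapi/aparapi-amdgpu:$TAG"
```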

It even automatically pushes the images to Docker Hub for me!


Yay, this week I spoke to official devs on the Ubuntu and Debian projects, and it seems my Aparapi library will be in both the mainline Debian and Ubuntu package repositories soon!

Go open source! :opensource:


I just released version 2.0.0 of Aparapi!

Aparapi is an open-source :opensource: framework for executing native Java :java: and Scala code on the GPU.



So my desktop has 32 CPU cores, 64 GB of memory, and 2x NVMe 960 Pro drives in a RAID 1 configuration. Not to mention 4x water-cooled Vega 64 Frontier Edition graphics cards (for OpenCL work).

I'm not even sure I can call this thing a desktop computer anymore; it's more like a low-end supercomputer. No matter what I throw at it, it responds instantly. What a fucking beast!

An animated gif of the open-source library :opensource: doing an n-body simulation. The blue dots are low-mass objects and the red dots are high-mass objects. It's written in Java :java: and runs on the GPU.

Aparapi is an open-source framework for executing native Java and Scala code on the GPU.



I finally released version 1.10.0 of Aparapi.

Aparapi is an open-source :opensource: framework for executing native Java :java: and Scala code on the GPU.



The changelog for this release is here:


An n-body simulation I wrote using Aparapi. The red dots are high-mass and the blue dots low-mass. The system starts ordered but evolves into a beautiful, natural-looking orbit.
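The core of an n-body kernel is the all-pairs gravity pass each GPU work-item performs for "its" body. A minimal plain-Java sketch of that per-body computation (not the simulation's actual code; in Aparapi, `i` would come from `getGlobalId()` inside `Kernel.run()`, and `G` and the softening term are arbitrary illustrative values):

```java
public class NBodySketch {
    // One O(n) inner pass of the O(n^2) gravity step: accumulate the
    // acceleration on body i from every other body j.
    static float[] acceleration(int i, float[] x, float[] y, float[] mass) {
        final float G = 1.0f;     // gravitational constant (arbitrary units)
        final float eps = 1e-3f;  // softening term to avoid divide-by-zero
        float ax = 0f, ay = 0f;
        for (int j = 0; j < mass.length; j++) {
            if (j == i) continue;
            float dx = x[j] - x[i];
            float dy = y[j] - y[i];
            float r2 = dx * dx + dy * dy + eps;
            float invR = (float) (1.0 / Math.sqrt(r2));
            float f = G * mass[j] * invR * invR * invR; // G*m_j / r^3
            ax += f * dx;
            ay += f * dy;
        }
        return new float[] { ax, ay };
    }

    public static void main(String[] args) {
        // two unit masses on the x-axis: each accelerates toward the other
        float[] x = { 0f, 1f }, y = { 0f, 0f }, m = { 1f, 1f };
        float[] a0 = acceleration(0, x, y, m);
        System.out.printf("body 0: ax=%.3f ay=%.3f%n", a0[0], a0[1]);
    }
}
```

Each work-item writes only its own body's acceleration, which is what makes the pass embarrassingly parallel and a good fit for the GPU.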


My favorite computer, used for all my deep learning R&D. When idle, it usually defaults to mining cryptocurrency to offset its build cost of about $10,000.
