Here is my #introduction:
My name is Daniel Martens. I'm a computer scientist who develops and maintains AI-enabled software systems that solve real-world problems.
I do not treat AI as a panacea; thanks to my strong background in requirements and software engineering, I am well aware of the challenges such software systems bring, e.g., in maintenance.
That is why I am especially interested in, and happy to exchange ideas on, #AppliedAI.
#artificialintelligence #AI #machinelearning #ML #datascience #softwareengineering #requirementsengineering
@dm Hey Daniel, welcome to Qoto. I'm a software engineer as well, but my love is algorithms.
I've noticed that most common AI libraries are Python-driven, so as a pet project I've been recreating some of them in other languages to improve versatility and benchmark performance variance across configurations. While Python handles the heavy math most efficiently, I've found Node handles prioritization and delegation better when many jobs are queued. And C++ microservices that compare results from the various AI libraries let an application feed input to several libraries at once and append metadata for the other AI libraries to use in supplemental reprocessing.
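To make the prioritization/delegation point concrete, here's a minimal sketch of what I mean by Node draining many queued jobs highest-priority first. The job names are illustrative stand-ins, not real library calls:

```typescript
// A tiny priority queue: jobs are enqueued with a priority and
// drained highest-priority first, one at a time.
type Job = { name: string; priority: number; run: () => Promise<string> };

class PriorityQueue {
  private jobs: Job[] = [];

  enqueue(job: Job): void {
    this.jobs.push(job);
    // Keep the queue sorted so the highest priority is always next.
    this.jobs.sort((a, b) => b.priority - a.priority);
  }

  async drain(): Promise<string[]> {
    const out: string[] = [];
    while (this.jobs.length > 0) {
      out.push(await this.jobs.shift()!.run());
    }
    return out;
  }
}
```

A real setup would of course run jobs concurrently with a worker pool rather than one at a time; this just shows the ordering idea.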
Maybe that's confusing so here's an example.
1) A user uploads an image / streams a video / streams audio into the server. (Image)
2) Node detects the media type, initializes the appropriate libraries for it, and sends them the data. (Tesseract) (OpenCV) (SciPy) (Mahotas) (Pgmagick)
3) Node knows there's an accuracy threshold that must be reached before results go to the user; once it's reached, (OpenCV) and (Mahotas) deliver their results to the user. 👍
4) The real magic begins here: (Tesseract) and (SciPy) are still running anyway.
(Tesseract) found text inside an object detected by (OpenCV), and (SciPy) found patterns between the text and the object's characteristics.
5) Node gets excited and initializes a few microservices that use that data to upgrade all the libraries' models.
6) Node re-runs step 2. All the metadata the AIs learned from each other might alter the results a bit; in the future this can be used to decide whether it's worth waiting longer before delivering results from particular combinations of AIs.
7) If the results changed drastically enough, notify the user that more detailed results are available.
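The steps above can be sketched in Node as a two-stage fan-out: deliver early once the fast detectors clear the threshold, let the slow ones finish in the background, and notify again if they change the picture. The "detectors" here are stand-in async functions, not real OpenCV/Tesseract bindings:

```typescript
// Each detector returns a named result with a confidence score;
// Tesseract's stand-in additionally returns found text.
type Result = { source: string; confidence: number; text?: string };

// Fast detectors (steps 2-3): deliver early results to the user.
const fastDetectors = [
  async (): Promise<Result> => ({ source: "opencv", confidence: 0.9 }),
  async (): Promise<Result> => ({ source: "mahotas", confidence: 0.85 }),
];

// Slow detectors (steps 4-5): keep running after early delivery.
const slowDetectors = [
  async (): Promise<Result> => ({ source: "tesseract", confidence: 0.7, text: "STOP" }),
  async (): Promise<Result> => ({ source: "scipy", confidence: 0.6 }),
];

const THRESHOLD = 0.8;

async function processMedia(
  onEarly: (r: Result[]) => void,   // step 3: first delivery
  onRefined: (r: Result[]) => void, // step 7: "more detailed results"
): Promise<Result[]> {
  const slow = Promise.all(slowDetectors.map((f) => f())); // start both stages
  const fast = await Promise.all(fastDetectors.map((f) => f()));

  // Step 3: deliver as soon as the fast detectors clear the threshold.
  const early = fast.filter((r) => r.confidence >= THRESHOLD);
  onEarly(early);

  // Steps 4-7: slow detectors were running the whole time; if they add
  // something new (e.g. text found inside a detected object), notify.
  const late = await slow;
  const refined = [...early, ...late];
  if (late.some((r) => r.text !== undefined)) onRefined(refined);
  return refined;
}
```

Step 6 (feeding the cross-library metadata back in and re-running) would just call `processMedia` again with the upgraded detectors; I left that out to keep the sketch small.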
It's so inefficient it's hilarious 😂
This toy could never be used in production..
And stripping all the libraries down to something efficient and merging them? No thanks. But I've found that baby configurations of this are doable.
Can you think of any libraries that might be fun to make learn from each other? Let me know, I'd love to try.
Have a great weekend 😁