Hello experienced programmers, I'm asking for advice about my workflow.
I'm writing a lot of small programs to use on a server in order to execute several operations and calculations.
My workflow is simply to SSH into the server, open vim and write the code; then I store it directly in a stow directory, either the script itself if it's Python or the compiled binary if it's something else.
The more I do this the more I realize this is unsustainable:
- I'm not using git, which is problematic as I often have to come back and make some adjustments.
- I cannot use a different IDE, which I would prefer.
- I don't really have proper personal backups.
The advantage of this is that I just have to write the code and I can use it immediately.
What I would like is to write the code on my personal computer and seamlessly have the software available in ~/.local/bin on my server, ready to run.
I do not want to have to run rsync 20 times in order to do this.
Do you know how I could set up my system in order to achieve this?
Thank you.
@rastinza I do some work like you describe, and I don't know how I'd live without **git**. the version control is one thing, but to address your issue: once your git repo is hosted somewhere, it's extremely easy to keep different local copies of it in sync
the general workflow is as follows...
* **write** code on your local system (e.g. laptop), using whatever tools you desire
* **commit** your code to your local repo for version control
* **push** those commits to a remote host (many are available, my workplace uses gitlab and I've not had a problem with that)
* `ssh` to your compute server or similar
* **pull** your commits from the remote host to the server
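in practice the whole loop is just a handful of commands, roughly like this (the repo URL and host name below are placeholders, not anything from your setup):

```bash
# on the laptop: one-time clone, then the usual edit/commit/push loop
git clone git@gitlab.example.com:me/server-scripts.git
cd server-scripts
vim myscript.py                 # or whatever editor/IDE you prefer
git add myscript.py
git commit -m "add myscript"    # version control happens locally
git push origin main            # publish to the hosted remote

# on the compute server (or via ssh in one shot): pull the latest commits
ssh compute-server 'cd ~/server-scripts && git pull'
```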
this works very easily for a single developer, but also scales to more complex cases where multiple folks are contributing to the code
you also get all the version control benefits of being able to see when exactly which changes were made and revert things if need be. commit messages are also great to leave your future self a comment as to why you made a certain change
there's a learning curve with git, but only minimal knowledge is needed to get started and see the benefit, so I can definitely recommend getting familiar with git!
@freeDomForTooting Thanks, I do use git already.
When I write large software I of course have a repository.
My problem here is with little scripts of some 20-30 lines that I need as utilities to perform various operations.
In this case the most important thing to me is that it should be quick to edit the behavior and immediately run it. I don't want to have to perform several different operations before I can actually run the script; in this case those would be:
- commit
- push
- pull
- copy to stow directory
- make executable
- stow the software
- run the software
That's a bit too much in my opinion, and at that point I might as well just write my stuff in vim on the server.
Someone mentioned git hooks, and that looks like a great solution though.
@rastinza ah ok I was telling you what you already know 😂 sorry!
I haven't used git hooks before but it sounds like they fit your case well, and I'll have to remember to check them out for myself too 🤭 hope it works out!
@diesch That's certainly a solution, but doing that over a controlled connection scares me a bit.
@valhalla No, this is not a good solution to the problem I've got for 2 reasons:
- I have a bunch of stuff in my ~/.local/bin, much of it binaries, and I often add or remove things there. Most of it is not developed by me, so I don't want it in my git repository; moreover, the repository would also end up including whichever libraries and local environments live in my bin directory.
- And then, of course, I would have to keep committing and pulling every time I edit something.
@rastinza seems like a classic CI/CD use case.
One could use a hook on pushes to main that automatically builds the code and then sends it to the server. This is what's still missing from my setup.
I do something similar: I wrote my own bash script that builds the code and then rsyncs it to the server via ssh.
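The core of that script is only a couple of lines; roughly something like this (the host name, build command and destination path are placeholders, not my actual ones):

```bash
#!/usr/bin/env bash
# build locally, then copy the results to the server over ssh
set -euo pipefail

make build                              # whatever builds your binaries
rsync -avz build/ myserver:.local/bin/  # sync the build output into ~/.local/bin on the server
```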
So, what I ended up doing is the following: I have a folder on my local computer which is tracked through git and synced with a remote server.
On the server where I need the software, I don't have the git repository at all.
In the git repository I have all the documents I'm developing, a lib folder, a build folder, a deploy folder and a virtual environment.
I have a post-commit hook which builds all the software, and in the deploy folder I keep symlinks to all the executables I want to have on the server.
The hook also pushes everything in the deploy folder and everything in the lib folder to a stow directory on the server through rsync.
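For anyone curious, the hook is roughly along these lines (a simplified sketch: the build script name, server name and stow layout are placeholders, not my exact paths):

```bash
#!/usr/bin/env bash
# .git/hooks/post-commit (marked executable) -- runs from the repo root after every commit
set -euo pipefail

./build.sh                                        # rebuild the software
rsync -avzL deploy/ myserver:stow/myscripts/bin/  # -L dereferences the symlinks kept in deploy/
rsync -avz  lib/    myserver:stow/myscripts/lib/  # ship the libraries alongside them
```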
At this point I can update any existing software by simply changing the code and committing.
To add new programs I have to perform an extra step, which is to restow the folder on the server.
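That extra step is a single command on the server; with the placeholder layout from the sketch above it would be something like:

```bash
# restow the package so the new symlinks show up under ~/.local/bin and ~/.local/lib
stow -d ~/stow -t ~/.local -R myscripts
```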
This works out quite nicely for me, since the virtual environment and all the build dependencies stay out of the way and I easily get everything onto the server.
I might as well start writing some tests for my software, but that will come a bit later...