These are public posts tagged with #wget. You can interact with them if you have an account anywhere in the fediverse.
Assigning your copyright to the FSF helps defend the GPL and keep software free. Thanks to Rongzhao Yan, Paolo De Santis, and Maximilian Küffner for assigning their copyright to the FSF! More at: https://u.fsf.org/463 #Emacs #Wget #CopyrightAssignments
New and Improved!
Choose your flavour, cURL or wget!
#curl
```
# download the install script and save it to your home directory
curl -o- https://gist.githubusercontent.com/ajaxStardust/674b5d86f1f4386e72937a607e263608/raw/install.sh > ~/install_adb_by_ajaxStardust.sh
chmod 755 ~/install_adb_by_ajaxStardust.sh
```
(Note: the octothorpe that originally prefixed the URL was only there to keep it from being rendered as an unwanted HTML anchor; it is not part of the command.)
**Note:** the script will place the app in your file system at the following location (if it can do so without sudo):
/var/www/html/mydocs/11011101/
#wget
```
# same as above, using wget: fetch the install script and save it to your home directory
wget -qO- https://gist.githubusercontent.com/ajaxStardust/674b5d86f1f4386e72937a607e263608/raw/install.sh > ~/install_adb_by_ajaxStardust.sh
chmod 755 ~/install_adb_by_ajaxStardust.sh
```
UPDATE: I changed it so it won't do anything automatically; you have to adjust the $vars first. One line, I think.
Still easy.
The Week in Review, Edition 83 (2025-11)
Topics:
Discover a new tool for the woodturning lathe.
Explore how LLMs can enhance the programming workflow.
Rating Graph for diving into insightful statistical analyses of TV shows and movies.
A must-watch show: “Schitt’s Creek”
Embark on a 1,300 km gravel adventure around Berlin with the Brandenburg Odyssey, and learn about a small route hiccup.
“Soil Moisture Viewer” from the German Weather Service for nerdy insights.
Xdebug Helper by JetBrains helps trigger the debugger in the age of Manifest V3.
CLI tool of the week: monolith – save complete web pages as a single HTML file.
Listened to this week: Lena Brysch, Hophiluck and MARIE CLAIRE.
#weekly #Woodworking #Wood #Turning #Woodturning #LLM #AI #IMDb #RatingGraph #SchittsCreek #Brandenburg #BrandenburgOdyssey #Gravel #Spreewald #Unterspreewald #DWD #Xdebug #PHP #SPX #monolith #wget #CLI #Techno
Download speed slow in terminal but normal on the browser (same connection) #commandline #apt #networkmanager #wget #downloadspeed
HTTrack - The Website Downloader
In this tutorial I show you how to save entire websites with HTTrack for offline access. Whether for your own backup or simply for browsing without an internet connection, I walk you through how it works step by step.
@bagder Problem with that is (besides occasional bugfixes), most people including myself would see #curl as functionally complete, and anything "nice to have" would be considered not worth the ballooning in #complexity and #size.
I mean, does curl need to support #BitTorrent (magnet:), #IPFS (ipfs://) or, god forbid, #blockchain (i.e. #EVM)?
Do you really want to integrate @torproject / #Tor support natively into curl when using #HTTP (localhost:8118) and #SOCKS5 (localhost:9050) #proxy allows for the same and doesn't necessitate having to handle and ingest Tor arguments as well??
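For illustration, a minimal sketch of that proxy route, assuming a local Tor daemon on its default SOCKS port 9050 and an HTTP proxy such as Privoxy on 8118 (the URL is just a placeholder):
```
# Route a request through Tor's SOCKS5 port; --socks5-hostname also
# resolves DNS through the proxy instead of locally.
curl --socks5-hostname localhost:9050 https://example.org/

# The same request through an HTTP proxy listening on port 8118.
curl --proxy http://localhost:8118 https://example.org/

# wget has no SOCKS support of its own, but it honours the proxy
# environment variables, so the HTTP proxy works there too.
https_proxy=http://localhost:8118 wget -qO- https://example.org/
```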
In fact, if #toybox didn't have a #wget implementation that I could use for OS/1337, I would've merely chosen tiny-curl -o as a global alias or, if #tinycurl wasn't an option, curl -o instead.
Maybe someone who wants said functionality like tor support built in will go and, IDK, make e.g. #neocurl or something along those lines, or build something like #ethcurl or #torcurl or #ipfscurl or whatever...
That being said, I am glad curl isn't solely maintained by you but has other contributors (give them a shoutout!), but I am also glad you maintain that vital software that most "#TechIlliterate #Normies" have most likely never heard of but probably use on a daily basis as part of all the #tech they use to #consume media with...
I consider curl to be "the #vim of downloaders" (though that's kinda insulting and limiting, since curl is more than just a downloader and more intuitive than vim), with wget being "the #vi of downloaders" (though wget is even simpler to use than vi)...
Either way, curl is awesome...
If you ever need to find out the size of a file without downloading it, there is this option:
wget --spider 'URL'
You can also create an alias for it, so you don't have to type it out by hand:
alias spider='wget --spider'
or, better yet, a function in .bashrc:
spider() {
    # spider mode downloads nothing; keep only the "Length" line from wget's stderr
    wget --spider "$1" 2> >(grep Length)
}
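A quick usage example of that function (the URL and the reported size below are purely illustrative):
```
$ spider https://example.org/big-file.iso
Length: 734003200 (700M) [application/octet-stream]
```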
Also: it must really be a gargantuan task to do a simple GET to fetch followers, right? Neither #curl nor #wget can do that, right? Right? #fediverse #mastodon #mastoadmin #MastoLivre #webdev
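For what it's worth, a sketch of that "simple GET" against a hypothetical ActivityPub actor (the instance and user are placeholders, and some servers only answer signed fetches, so the request may be refused):
```
# Ask for the actor's followers collection as ActivityPub JSON.
curl -s -H 'Accept: application/activity+json' \
     https://example.social/users/alice/followers

# The same request with wget.
wget -qO- --header='Accept: application/activity+json' \
     https://example.social/users/alice/followers
```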
Sometimes you need to make a complete copy of a web site for offline access, or to convert a dynamic site into a static one, especially when a client has a web site running on an obsolete version of a content manager such as WordPress or Joomla and on a vulnerable version of PHP. This is especially useful when the web site does not need to be modified. The benefit is a faster and more secure site, since a static HTML site is sent as-is to the visitor's browser, without having to execute potentially vulnerable PHP code.
With the wget command-line tool it is very easy to download an entire web site:
wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://example.org
Explanation of some of the arguments:
--mirror – downloads recursively and makes other related adjustments; implies -r -N -l inf --no-remove-listing.
--convert-links – converts links and references in HTML and CSS so they can be viewed offline.
--adjust-extension – adjusts the extension of HTML or CSS objects according to their content type.
--page-requisites – downloads linked objects such as CSS style sheets and images required to view the site offline.
--no-parent – does not ascend to parent directories in the URL; useful when mirroring a specific URL such as http://example.org/sub-directorio/
Example of converting a WordPress site on a cPanel system:
# Log in to a shell as the site user, so as not to run commands as root
su - user -s /bin/bash
# Create a download directory
mkdir download && cd download
wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://example.org
# Back up the web site's document root
cd ..
mv public_html public_html~
# Put the downloaded copy online
mv download/example.org public_html
# Test the site, adjust details and optionally delete the database after backing it up.
# With this, the dynamic site has been converted to a static one.
References:
GNU wget manual: https://www.gnu.org/software/wget/manual/wget.html
Make Offline Mirror of a Site using wget: https://www.guyrutenberg.com/2014/05/02/make-offline-mirror-of-a-site-using-wget/
https://www.impulsait.com/hacer-una-copia-o-espejo-de-un-sitio-usando-wget/
#DailyBloggingChallenge (364/365)
The 'Quick Start' section in the Readme sufficed for setting up.
The only thing that I had to change in the `./models/download-ggml-model.sh` script (1) was to remove the `--show-progress` option on line 105. It seems GNU Wget2 2.1.0 doesn't have that option.
Alternatively one can replace the option with
`--progress=bar --force-progress`
- 1: https://github.com/ggerganov/whisper.cpp/blob/master/models/download-ggml-model.sh
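A minimal sketch of that substitution, assuming the script's wget call looks roughly like the line below (the variable names and exact options in download-ggml-model.sh may differ):
```
# Original flavour (GNU Wget 1.x) - Wget2 2.1.0 rejects --show-progress:
#   wget --quiet --show-progress -O "$model_path" "$model_url"

# Wget2-friendly replacement, as suggested above
# ($model_path and $model_url are placeholders here):
wget --quiet --progress=bar --force-progress -O "$model_path" "$model_url"
```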
Question: saving from Archive.org
I have already reported on my bad luck with onyxhosting.de locking out its customers.
The extensive local-history project is available as a snapshot on archive.org:
https://web.archive.org/web/20240527114059/https://geschichten-aus-weissenhorn.de/
Before we are left with nothing at all, I would work my way through it with wget. What would be the smartest command to save as much as possible automatically?
What I have tried so far:
wget -r -np -k -p https://web.archive.org/web/20240527114059/https://geschichten-aus-weissenhorn.de/
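One possible sketch of a fuller invocation, under the assumption that the goal is the same kind of offline mirror described in the wget --mirror post above (archive.org's rewritten links and rate limits can still get in the way, so results may vary):
```
# Mirror the snapshot recursively, staying below the given path,
# rewriting links for offline use and fetching page requisites.
# -e robots=off ignores robots.txt; --wait=1 is polite to the server.
wget --mirror --convert-links --adjust-extension --page-requisites \
     --no-parent -e robots=off --wait=1 \
     "https://web.archive.org/web/20240527114059/https://geschichten-aus-weissenhorn.de/"
```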
@bortzmeyer @bagder It's an interesting point of view. I'd assume that #wget can cover all the use cases curl provides for more than half of the people who use curl.
I think people want a wget alternative not in order to use two separate tools, but to stay with #curl. They may want it to keep the number of installed tools down (in some containers, minimalistic VMs, some IoT devices, etc.). Why would you suggest using wget instead?
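As a rough illustration of the overlap being discussed, a few everyday download tasks expressed in both tools (the URLs are placeholders):
```
# Fetch a URL and print it to stdout:
curl -fsSL https://example.org/
wget -qO-  https://example.org/

# Save a URL under a specific file name:
curl -fsSL -o page.html https://example.org/
wget -q    -O page.html https://example.org/

# Send a simple POST request:
curl -d 'key=value' https://example.org/api
wget -qO- --post-data='key=value' https://example.org/api
```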