> D <nospam@example.net> wrote:
>>> On Tue, 26 Nov 2024, yeti wrote:
>>>> D <nospam@example.net> wrote:
>>>>> I have been thinking about moving the reading part of web browsing
>>>>> into the terminal as well, but haven't found a browser I'm happy
>>>>> with.
>>>> I use Elinks, Emacs/EWW and W3m, but none of them can replace the
>>>> scary fullfat browsers. They seem to just fit Smolweb stuff (FTP,
>>>> Gemini, Gopher and similar).
>>> True.
>>> I like seeing useful images, so prefer Dillo and Links (the latter
>>> does support display via the framebuffer so you can run it
>>> graphically without X).
>> For some reason, I never managed to get the framebuffer to work. Have
>> no idea why. =( I would like to get it to work though. Dillo was a
>> good tip! I did play with it for a bit, but then forgot about it.
>> Maybe the reason was a lack of tabs or buffers. I think Links, or
>> maybe it was Elinks, had a way for me to replicate tabs or vi buffers
>> in the browser. It was super convenient!
>> I know... as a perfectionist this does not go down well with me. ;)
>>>>> Maybe it would be possible to write a kind of "pre-processor" that
>>>>> formats web sites with a text based browser in mind?
>>>> Despite me finding this solution really scary, something like that
>>>> indeed exists:
>>>>
>>>> <https://www.brow.sh/>
>> Ah yes... I've seen this before! I did drop it due to its dependency
>> on FF, but the concept is similar. My idea was to aggressively filter
>> a web page before passing it on to Elinks or similar.
>>
>> Perhaps rewriting it a bit in order to avoid the looooooong list of
>> menu options or links that always comes up at the top of the page,
>> before the content of the page shows after a couple of page-downs
>> (this happens, for instance, if I go to Wikipedia).
> Lucky if it's just a couple of page-downs; I can easily be
> hammering the button on some insane pages where 10% is the actual
> content and 90% is menu links. Often it's quicker to press End
> and work up from the bottom, but many websites have a few pages of
> junk at the bottom too now, so you have to hunt for the little
> sliver of content in the middle.
>> Instead, parsing it and adding those links at the bottom, removing
>> Javascript, and perhaps passing on only the text.
> A similar approach is taken by frogfind.com, except rather than
> parsing the links and putting them at the end, it deletes them,
> which makes it impossible to navigate many websites. It does the
> other things you mention, but the link rewriting would probably be
> the hardest part to get right with a universal parser.

Did not know about frogfind! This could be a great start to improve the
readability! In my home-brew rss2email script, I automatically create
archive.is links, so that when I want to read articles behind paywalls,
archive.is is already built in.
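
The archive.is part is simple to bolt on, since archive.is serves its
latest capture of a page under /newest/<url>. A minimal sketch of just
that step, with made-up names rather than the actual script:

  # Emit the original link plus an archive.is "newest capture" link
  # for one feed entry, as plain text for a mail body.
  from urllib.parse import quote

  def archive_link(url):
      # archive.is redirects /newest/<url> to its most recent snapshot
      return "https://archive.is/newest/" + quote(url, safe=":/?&=")

  def format_entry(title, url):
      return "%s\n%s\n%s\n" % (title, url, archive_link(url))

  print(format_entry("Example article",
                     "https://example.com/paywalled-story"))

Everything else stays as it is; the script just prints one extra line
per feed entry.
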
> Site-specific front-ends are a simpler goal. This is a list of ones
> that work in Dillo, and therefore without Javascript:
>
> https://alex.envs.net/dillectory/
>
> Of course then you have the problem of them breaking as soon as the
> target site/API changes or blocks them.

This is the truth!
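
Coming back to the pre-processor idea: a toy version of that filter,
call it prefilter.py, is sketched below. It is stdlib Python, untested
against real pages, and nowhere near the universal parser this would
actually need; it just drops script/style/nav junk, keeps the bare
text, and collects every link at the bottom:

  #!/usr/bin/env python3
  # Toy pre-processor: HTML on stdin, bare text on stdout, with
  # script/style/nav junk dropped and all links moved to the bottom.
  import sys
  from html.parser import HTMLParser

  class TextFilter(HTMLParser):
      SKIP = {"script", "style", "nav", "header", "footer"}

      def __init__(self):
          super().__init__()
          self.skip_depth = 0   # > 0 while inside a junk tag
          self.links = []       # hrefs deferred to the bottom
          self.text = []        # text fragments worth keeping

      def handle_starttag(self, tag, attrs):
          if tag in self.SKIP:
              self.skip_depth += 1
          elif tag == "a":
              href = dict(attrs).get("href")
              if href:
                  self.links.append(href)

      def handle_endtag(self, tag):
          if tag in self.SKIP and self.skip_depth:
              self.skip_depth -= 1

      def handle_data(self, data):
          if not self.skip_depth and data.strip():
              self.text.append(data.strip())

  f = TextFilter()
  f.feed(sys.stdin.read())
  print("\n".join(f.text))
  print("\nLinks:")
  for n, href in enumerate(f.links, 1):
      print("[%d] %s" % (n, href))

Since the output is plain text anyway, it can go straight into a pager
(curl -s URL | prefilter.py | less) rather than through a browser.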