Subject: Re: terminal only for two weeks
From: not (at) *nospam* telling.you.invalid (Computer Nerd Kev)
Newsgroups: comp.misc
Date: 26. Nov 2024, 22:52:52
Organisation: Ausics - https://newsgroups.ausics.net
Message-ID: <67464333@news.ausics.net>
References: 1 2 3 4 5
User-Agent: tin/2.0.1-20111224 ("Achenvoir") (UNIX) (Linux/2.4.31 (i586))

D <nospam@example.net> wrote:
> On Tue, 26 Nov 2024, yeti wrote:
>> D <nospam@example.net> wrote:
>>> I have been thinking about moving the reading part of web browsing
>>> into the terminal as well, but haven't found a browser I'm happy
>>> with.
>>
>> I use Elinks, Emacs/EWW and W3m, but none of them can replace the scary
>> fullfat browsers. They seem to just fit Smolweb stuff (FTP, Gemini,
>> Gopher and similar).
>
> True.

I like seeing useful images, so prefer Dillo and Links (the latter
does support display via the framebuffer so you can run it
graphically without X).

>>> Maybe it would be possible to write a kind of "pre-processor" that
>>> formats web sites with a text based browser in mind?
>>
>> Despite me finding this solution really scary, something like that
>> indeed exists:
>>
>> <https://www.brow.sh/>
> Ah yes... I've seen this before! I did drop it due to its dependency on
> FF, but the concept is similar. My idea was to aggressively filter a web
> page before passing it on to elinks or similar.
> Perhaps rewriting it a bit in order to avoid the looooooong list of menu
> options or links that always come up at the top of the page, before the
> content of the page shows after a couple of page downs (this happens for
> instance if I go to wikipedia).

Lucky if it's just a couple of page-downs, I can easily be
hammering the button on some insane pages where 10% is the actual
content and 90% is menu links. Often it's quicker to press End
and work up from the bottom, but many websites have a few pages of
junk at the bottom too now, so you have to hunt for the little
sliver of content in the middle.

> Instead parsing it, and adding those links at the bottom, removing
> javascript, and perhaps passing on only the text.

A similar approach is taken by frogfind.com, except rather than
parsing the links and putting them at the end, it deletes them,
which makes it impossible to navigate many websites. It does the
other things you mention, but the link rewriting would probably be
the hardest part to get right with a universal parser.
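
Just to make that concrete, here's a rough stdlib-only Python sketch of
the sort of filter you describe (the script name and the fixed 78-column
wrapping are just placeholder choices, not a real tool): it throws away
script/style blocks, keeps the readable text, and pushes every link out
to a numbered list at the end rather than leaving a menu wall at the top.

#!/usr/bin/env python3
# Sketch of a web-page "pre-processor" for text browsers:
# strip <script>/<style>, keep the readable text, and move all
# links to a numbered list at the end.  Reads HTML on stdin,
# writes plain text on stdout, e.g.:
#   curl -s https://example.com/ | python3 prefilter.py | less
import sys
import textwrap
from html.parser import HTMLParser

class TextAndLinks(HTMLParser):
    SKIP = {"script", "style", "head", "noscript"}

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.skip_depth = 0   # > 0 while inside a SKIP element
        self.in_link = False  # True while inside an <a href=...>
        self.text = []        # readable text fragments
        self.links = []       # (anchor text, href), in page order
        self._href = None
        self._anchor = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.in_link = True
                self._href = href
                self._anchor = []

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1
        elif tag == "a" and self.in_link:
            label = " ".join(self._anchor) or "(no text)"
            self.links.append((label, self._href))
            self.in_link = False

    def handle_data(self, data):
        data = data.strip()
        if not data or self.skip_depth:
            return
        if self.in_link:
            self._anchor.append(data)  # link text goes to the end list
        else:
            self.text.append(data)     # everything else stays in the body

parser = TextAndLinks()
parser.feed(sys.stdin.read())
print(textwrap.fill(" ".join(parser.text), width=78))
if parser.links:
    print()
    print("Links:")
    for n, (label, href) in enumerate(parser.links, 1):
        print("[%d] %s <%s>" % (n, label, href))

Anything smarter than that (menus vs. in-text links, relative URLs,
encodings) is where the universal parser starts to hurt.
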
Site-specific front-ends are a simpler goal. This is a list of ones
that work in Dillo, and therefore without Javascript:
https://alex.envs.net/dillectory/

Of course then you have the problem of them breaking as soon as the
target site/API changes or blocks them.

-- 
__          __
#_ < |\| |< _#