Subject: Re: Help with Streaming and Chunk Processing for Large JSON Data (60 GB) from Kenna API
From: olegsivokon (at) *nospam* gmail.com (Left Right)
Newsgroups: comp.lang.python
Date: 30 Sep 2024, 09:41:44
Message-ID: <mailman.3.1727704162.3018.python-list@python.org>
Whether and to what degree you can stream JSON depends on the JSON's
structure. In the most general case JSON cannot be streamed, although
in practice it usually can be.
Imagine a pathological case of this shape: 1... <60GB of digits>. This
is still valid JSON (the format places no limit on how many digits a
number may have), and you cannot parse this number in a streaming way:
no digit's significance is known until the final digit arrives, so in
effect you would need to start from the least significant digit, which
is the last one you receive.
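
To make this concrete, here is a minimal sketch (mine, untested) of
consuming such a bare number from string chunks: each new digit shifts
everything read so far, so nothing can be emitted until the input
ends, and the partial value grows along with the input.

def parse_number(chunks):
    # Horner's rule: every digit multiplies what came before, so the
    # value is only final once no digits remain.
    value = 0
    for chunk in chunks:
        for ch in chunk:
            value = value * 10 + int(ch)  # must hold the whole magnitude
    return value  # for 60 GB of digits, a gigantic bignum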
Typically, however, JSON can be parsed incrementally. The format is
conceptually very simple to write a parser for, and there are plenty
of parsers that do this, for example:
https://pypi.org/project/json-stream/ . But I'd encourage you to write
one yourself. It's fun, the resulting parser should come in under some
50 LoC, and it lets you integrate your desired output more closely
into the parser.
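
For example, if the Kenna export turns out to be one big JSON array of
objects (an assumption on my part -- I haven't checked their format),
a sketch could look like the following. iter_objects, EXPORT_URL and
process are my own placeholder names, and the whole thing is untested:

import gzip
import io
import json
import requests

def iter_objects(chunks):
    # Scan chunk by chunk, tracking string/escape state and brace
    # depth; buffer one top-level object at a time and hand each
    # completed object to json.loads.
    depth = 0
    in_string = False
    escaped = False
    buf = []
    for chunk in chunks:
        for ch in chunk:
            if depth == 0:
                if ch == '{':            # start of the next object
                    depth = 1
                    buf.append(ch)
                continue                 # skip '[', ',', ']', whitespace
            buf.append(ch)
            if in_string:
                if escaped:
                    escaped = False
                elif ch == '\\':
                    escaped = True
                elif ch == '"':
                    in_string = False
            elif ch == '"':
                in_string = True
            elif ch == '{':
                depth += 1
            elif ch == '}':
                depth -= 1
                if depth == 0:           # object complete: emit it
                    yield json.loads(''.join(buf))
                    buf = []

# Hypothetical wiring to the gzipped endpoint: stream the raw body,
# decompress lazily, and feed fixed-size text chunks to the parser.
resp = requests.get(EXPORT_URL, stream=True)   # EXPORT_URL: your export
text = io.TextIOWrapper(gzip.GzipFile(fileobj=resp.raw), encoding='utf-8')
for obj in iter_objects(iter(lambda: text.read(1 << 16), '')):
    process(obj)                               # process(): yours to define

The same generator works unchanged over a local copy opened with
gzip.open(path, 'rt').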
On Mon, Sep 30, 2024 at 8:44 AM Asif Ali Hirekumbi via Python-list <python-list@python.org> wrote:
>
Thanks Abdur Rahmaan.
I will give it a try!
>
Thanks
Asif
>
On Mon, Sep 30, 2024 at 11:19 AM Abdur-Rahmaan Janhangeer <arj.python@gmail.com> wrote:
>
Idk if you tried Polars, but it seems to work well with JSON data
>
import polars as pl
pl.read_json("file.json")
>
Kind Regards,
>
Abdur-Rahmaan Janhangeer
about <https://compileralchemy.github.io/> | blog <https://www.pythonkitchen.com>
github <https://github.com/Abdur-RahmaanJ>
Mauritius
>
>
On Mon, Sep 30, 2024 at 8:00 AM Asif Ali Hirekumbi via Python-list <python-list@python.org> wrote:
>
Dear Python Experts,
>
I am working with the Kenna Application's API to retrieve vulnerability
data. The API endpoint provides a single, massive JSON file in gzip format,
approximately 60 GB in size. Handling such a large dataset in one go is
proving to be quite challenging, especially in terms of memory management.
>
I am looking for guidance on how to efficiently stream this data and
process it in chunks using Python. Specifically, I am wondering if there’s
a way to use the requests library or any other libraries that would allow
us to pull data from the API endpoint in a memory-efficient manner.
>
Here are the relevant API endpoints from Kenna:
>
- Kenna API Documentation
<https://apidocs.kennasecurity.com/reference/welcome>
- Kenna Vulnerabilities Export
<https://apidocs.kennasecurity.com/reference/retrieve-data-export>
>
If anyone has experience with similar use cases or can offer any advice,
it would be greatly appreciated.
>
Thank you in advance for your help!
>
Best regards
Asif Ali