Help with Streaming and Chunk Processing for Large JSON Data (60 GB) from Kenna API

Subject: Help with Streaming and Chunk Processing for Large JSON Data (60 GB) from Kenna API
From: asifali.ha (at) *nospam* gmail.com (Asif Ali Hirekumbi)
Newsgroups: comp.lang.python
Date: 27 Sep 2024, 08:17:12
Message-ID: <mailman.0.1727668850.3018.python-list@python.org>

Dear Python Experts,

I am working with the Kenna Application's API to retrieve vulnerability
data. The API endpoint provides a single, massive JSON file in gzip format,
approximately 60 GB in size. Handling such a large dataset in one go is
proving to be quite challenging, especially in terms of memory management.

I am looking for guidance on how to efficiently stream this data and
process it in chunks using Python. Specifically, I am wondering whether
the requests library, or any other library, would let me pull the data
from the API endpoint in a memory-efficient manner.
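
For what it's worth, here is the kind of approach I have been
considering: requests can stream the response with stream=True, and
gzip.GzipFile can decompress the socket's file-like object lazily, so
only one chunk at a time is ever held in memory. The export URL and the
X-Risk-Token header below are placeholders I made up, not real values:

import gzip
import requests

# Placeholder URL and token -- substitute the real export URL and
# API key from the Kenna data-export endpoint.
EXPORT_URL = "https://api.kennasecurity.com/data_exports?search_id=12345"
HEADERS = {"X-Risk-Token": "YOUR_API_TOKEN"}

def stream_decompressed_chunks(url, headers, chunk_size=1024 * 1024):
    """Yield decompressed chunks of the gzipped response one at a
    time, without ever holding the full 60 GB in memory."""
    with requests.get(url, headers=headers, stream=True) as resp:
        resp.raise_for_status()
        # resp.raw is a file-like view of the socket; GzipFile reads
        # from it and decompresses lazily as we consume chunks.
        with gzip.GzipFile(fileobj=resp.raw) as gz:
            while True:
                chunk = gz.read(chunk_size)
                if not chunk:
                    break
                yield chunk

That handles the download, but the decompressed payload is still one
giant JSON document, so I would still need to parse it incrementally
(see the ijson sketch after the endpoint list below).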

Here are the relevant API endpoints from Kenna:

   - Kenna API Documentation
   <https://apidocs.kennasecurity.com/reference/welcome>
   - Kenna Vulnerabilities Export
   <https://apidocs.kennasecurity.com/reference/retrieve-data-export>
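
For the parsing side, I believe an incremental parser such as ijson (a
third-party package) could walk the decompressed stream record by
record. A sketch, assuming the export is shaped like
{"vulnerabilities": [ ... ]} -- the item path is a guess on my part:

import gzip
import ijson
import requests

EXPORT_URL = "https://api.kennasecurity.com/data_exports?search_id=12345"  # placeholder
HEADERS = {"X-Risk-Token": "YOUR_API_TOKEN"}  # placeholder

def process(vuln):
    # Stand-in for real per-record handling.
    print(vuln.get("id"))

with requests.get(EXPORT_URL, headers=HEADERS, stream=True) as resp:
    resp.raise_for_status()
    gz = gzip.GzipFile(fileobj=resp.raw)
    # "vulnerabilities.item" assumes a top-level "vulnerabilities"
    # array; adjust the prefix to the actual export layout.
    for vuln in ijson.items(gz, "vulnerabilities.item"):
        process(vuln)

Does this look like a sound direction, or is there a better-suited
library for this kind of workload?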

If anyone has experience with similar use cases or can offer any advice, it
would be greatly appreciated.

Thank you in advance for your help!

Best regards,
Asif Ali
