Reading concatenated JSON objects from an istream #210
Comments
Is there anything between the JSON objects besides whitespace, or maybe newlines only between the objects but nowhere else? Since you say pretty-printed, I'm guessing the answer is no. Do you have a small sample you can share?
... would be an example. Basically, I am trying to slurp in data from a process which is periodically appending. Thanks!
One thing you could try is to iterate over the characters and count how many { you have, subtracting from that counter every time you encounter a }. Once the counter reaches 0, you know where the first JSON object ends.
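The brace-counting idea above might look like the following minimal sketch (the function name is mine, not part of the library; it also skips braces inside string literals, which a bare counter would miscount):

```cpp
#include <string>

// Return the first balanced {...} object found in `input`, or an empty
// string if no complete object is present yet (e.g. the writer is still
// appending). Braces inside string literals are ignored.
std::string extract_first_json(const std::string& input) {
    int depth = 0;
    bool in_string = false, escaped = false;
    std::size_t start = 0;
    for (std::size_t i = 0; i < input.size(); ++i) {
        char c = input[i];
        if (in_string) {
            if (escaped)        escaped = false;   // char after a backslash
            else if (c == '\\') escaped = true;
            else if (c == '"')  in_string = false;
            continue;
        }
        if (c == '"') {
            in_string = true;
        } else if (c == '{') {
            if (depth == 0) start = i;             // object begins here
            ++depth;
        } else if (c == '}') {
            if (--depth == 0)                      // object is balanced
                return input.substr(start, i - start + 1);
        }
    }
    return {};  // incomplete: wait for more data
}
```

The returned substring could then be handed to the usual parser; the remainder of the buffer holds the next object(s).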
Argh, of course. That sounds good. Thanks for the help!
No prob, have a nice weekend!
Hi @lilinjn, does the idea of @whackashoe help you? The format you are using reminds me of JSON Lines, but your input seems to be pretty-printed. I currently have no plans to support such an extension.
Yes -- I think it's simpler to roll my own reader based on @whackashoe's idea.
Ok, then thanks @whackashoe for helping!
Hi,
Your library is great (and because I am lazy about being lazy, I've typed nlohmann::json so many times that I feel like I practically know you).
I have a file of multiple concatenated, pretty-printed JSON objects that I want to parse. Is there a way with the current library to split the string into individual JSON objects which can then be dealt with as usual? (Of course, any other reasonable way of skinning the same cat would be appreciated.) Otherwise, I've seen JSON PCRE regexes that might also do the trick. Do you think that is the way to go?
Thanks!