Currently trying to extract and format data from PDFs using #python #PyMuPDF.
Initially I used the `get_text(value)` method with the `"text"` value, only to learn that I could potentially have saved time by using the `"html"` value directly, since I have been writing pattern matchers to format the text into #HTML.
After investigating: although the `"html"` option exists, the post-processing it needs is more strenuous than my initial approach.
My fascination with the `get_text(value)` method is that each value packages the data differently. Whereas `"html"` puts the text in `<p><span>text</span></p>`, `"xhtml"` puts it in `<h1>text</h1>` instead.
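To illustrate, a minimal sketch of the flavours side by side (the filename here is just a stand-in, not from my project):

```python
import fitz  # PyMuPDF

doc = fitz.open("document.pdf")  # hypothetical input file
page = doc[0]  # first page

plain = page.get_text("text")    # plain text, lines separated by newlines
html = page.get_text("html")     # <p><span>text</span></p> style markup
xhtml = page.get_text("xhtml")   # leaner markup, e.g. headings as <h1>

# Peek at the first 200 characters of each flavour
for sample in (plain, html, xhtml):
    print(sample[:200], "\n---")

doc.close()
```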
Continuing to extract and format data from PDFs using #python #PyMuPDF.
I was trying to create a perfect chain of functions that would format all the edge cases into the final desired #HTML. I quickly realized that running every tweaked version of those functions against the 100-page PDF is quite time consuming.
Instead I can run the extraction once and save the results in a #sqlite database, then write #sql queries to post-process the edge cases. That also gives me a far better way to inspect the contents of each page than the previous method of printing the output to the #terminal and scrolling to the desired page. And in the end, I am one step closer to having the data in a #csv file, which is easy to export with #Dbeaver.
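A minimal sketch of that run-once-and-cache idea, assuming PyMuPDF plus the standard library; the file, table, and column names are made up for illustration:

```python
import sqlite3
import fitz  # PyMuPDF

doc = fitz.open("document.pdf")  # hypothetical input file
con = sqlite3.connect("pages.db")
con.execute(
    "CREATE TABLE IF NOT EXISTS pages (num INTEGER PRIMARY KEY, content TEXT)"
)

# Extract every page exactly once and persist it,
# so later tweaks only need to touch the database.
for page in doc:
    con.execute(
        "INSERT OR REPLACE INTO pages (num, content) VALUES (?, ?)",
        (page.number, page.get_text("text")),
    )
con.commit()

# Post-process edge cases with SQL instead of re-running the extraction,
# e.g. inspect a single page without scrolling through terminal output:
row = con.execute("SELECT content FROM pages WHERE num = ?", (42,)).fetchone()
print(row[0])

con.close()
doc.close()
```

From there the `pages` table can be queried, cleaned up, and exported to #csv with #Dbeaver.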
@barefootstache Have you tried using pdftotext with layout mode and some regex? That worked for me with a 600-page database schema :)
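(For anyone following along, a hedged sketch of that suggestion, driving the poppler `pdftotext` CLI from #python; the regex pattern is a placeholder, not tied to any real schema:)

```python
import re
import subprocess

# -layout preserves the physical column layout of the page,
# which makes the output much more regex-friendly.
subprocess.run(
    ["pdftotext", "-layout", "document.pdf", "document.txt"],  # hypothetical files
    check=True,
)

with open("document.txt", encoding="utf-8") as f:
    for line in f:
        # Placeholder pattern: two words separated by a run of spaces,
        # e.g. "column_name   TYPE" pairs in a schema dump.
        match = re.match(r"^\s*(\w+)\s{2,}(\w+)", line)
        if match:
            print(match.group(1), match.group(2))
```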
@johnabs No, I haven't tried many other libraries yet. I am currently searching for better options, though at the same time I will need image extraction capabilities.
I have tried PyPDF, though I wasn't happy with the results.
@barefootstache pdftotext allows image extraction apparently :3
@johnabs I was looking at
@barefootstache Perfect, glad I could contribute a bit :3