🤖 New Script – AI Automated Content Audit #1983
To efficiently work through the large amount of content on the Tina docs pages, I've added a Python 3 content audit script that uses a local AI model to analyze each page with a configurable prompt, along with a README covering usage.
The implementation uses Ollama as the local LLM server, making a separate HTTP request for each file.
The URL can be swapped out and the payload modified to target other local LLM servers, or hosted APIs that may give better results or offer a larger parameter size.
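The per-file request can be sketched as follows. This is a hedged sketch rather than the script's exact code: the model name (`llama3`) and the default Ollama endpoint on `localhost:11434` are assumptions, though the `/api/generate` route, `stream` flag, and `response` field are part of Ollama's documented REST API.

```python
import json
import urllib.request

# Default Ollama endpoint; swap this URL (and the payload shape)
# to use a different local server or hosted API.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str, page_text: str) -> dict:
    """Combine the audit prompt with one page's markdown content."""
    return {
        "model": model,
        "prompt": f"{prompt}\n\n---\n\n{page_text}",
        "stream": False,  # ask for one complete JSON response
    }

def audit_page(page_text: str, prompt: str, model: str = "llama3") -> str:
    """POST a single page to the local Ollama server, return the reply text."""
    data = json.dumps(build_payload(model, prompt, page_text)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```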
The script generates two files. The first contains a list of all discovered markdown content. The second, `auditor-responses.md`, lists each file and marks it either as "clean" or with the prompt's response text. To get the filter effect (marking files as clean), the prompt needs to instruct the model to start its answer with either "yes" or "no". The response file is written in markdown syntax.
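The yes/no filter can be implemented by inspecting the first word of the model's reply. A minimal sketch, with two assumptions called out: a reply beginning with "no" (no issues found) is treated as clean, and the markdown layout of each entry is illustrative rather than the script's actual format.

```python
def classify_response(response: str) -> str:
    """Return "clean" when the reply starts with "no" (assumed to mean
    no issues were found); otherwise keep the full response text."""
    stripped = response.strip()
    if not stripped:
        return stripped
    first_word = stripped.split()[0].lower().rstrip(".,:;!")
    return "clean" if first_word == "no" else stripped

def format_entry(path: str, response: str) -> str:
    """Render one audit result as a markdown section (illustrative layout)."""
    return f"## {path}\n\n{classify_response(response)}\n"
```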
General Contributing:
All New Content Submissions: (To be confirmed by reviewer)