JAQForum Ver 19.10.27
Forum Index : Microcontroller and PC projects : PicoDB
"Vibe coding", a fun activity for a long-retired, increasingly mobility-constrained programmer analyst. Following the 4-day project with Claude AI to build a MMBasic database retrieval system, I was not happy that it was so tightly tied to the actual data. I started over with ChatGPT-5.2 with a JSON structure, and spent 2 days on it. I found json to be too bulky for me to like, and the experience with ChatGPT was terrible--many instances of "Page not responding: Wait/Close" with 10-minute waits. Chat said this was normal with long sessions, but that's not what I experience with Claude or Gemini. So I switched to fixed-length records with Gemini, and had a real 5-day grind. Recently read my favorite AI substacker say that 2026 will become "all day, every day" for dedicated users, and that's what it was for me. The result is a generic, multi-table, multi-database relational retrieval system (updates to come). It's 1700 lines long, with far more features than the 2100-line version that Claude came up with. I've implemented the same 500-record fictional person database as with Claude (2 tables), and part of Microsoft's test Northwind database (4 table). It can be broken into the a library section and a user section. The stub of the user section is less than 100 lines long and takes less than 1% of program memory. The remained is loaded with LIBRARY LOAD. Here is how Gemini has documented it (I'm not sure how the formatting will translate here). When I get update access, I will enter this on fruitoftheshed. Project Name: PicoDB - Relational Database Engine for MMBasic PicoDB is a lightweight, fully relational database engine written in MMBasic. It is designed for the RP2350 Pico2 microcontroller (PicoMite) where RAM is somewhat scarce but flash SD card storage is plentiful. It supports both a Native Command Mode and a subset of SQL, allowing for complex data querying, filtering, and reporting on hardware with limited resources. 
Key Features

- Dual Mode: Interact via a CLI using Native commands (pipe/unix style) or standard SQL (SELECT, DELETE).
- Import Wizard: Built-in MAKE-DB command converts standard CSV files into fixed-width binary tables and automatically generates the schema.
- Relational Engine: Supports implicit and explicit joins across up to 5 open tables simultaneously.
- Indexing: B-Tree style indexing implementation for instant lookups (O(log n) vs O(n)).
- Aggregates: Built-in support for COUNT, SUM, AVG, MIN, MAX.
- Context Switching: Instantly switch databases using USE-DB without restarting the program.

1. Installation

1. Code: Upload db.bas to your device.
2. Help Files: Create two text files on your storage root (SD or Flash) named native.hlp and sql.hlp. (Content provided below.)
3. Run: RUN "db.bas"

2. Quick Start (The "30 Second" Tutorial)

Step 1: Import Data
You don't need to write schema files manually. Just upload your CSV files (with header lines) and run the import wizard.

DB:us500> make-db mydata employees.csv,customers.csv

This analyzes the CSVs, determines field types (String/Int/Float), creates the mydata.def schema, and converts data to fixed-width binary .dat files.

Step 2: Load the Database

DB:us500> use-db mydata
Switching to database: mydata
Database 'mydata' Loaded. 2 Tables, 15 Fields.

Step 3: Query (SQL Mode)

DB:mydata> MODE SQL   (switch back with MODE NATIVE)
SQL DB:mydata> SELECT first_name, job_title FROM employees WHERE salary > 50000 ORDER BY last_name

3. Native Command Mode

Native mode is optimized for minimal typing and "pipe-like" logic.

Syntax: [filter] [actions...]

Operators:
=, <>, >, <, >=, <= : Numeric and String comparison
~ : Case-insensitive "Contains" (also " LIKE " translates to "~")
| : OR operator (city="London" | city="Paris")
& : AND operator (implicit, but can be explicit)

4. SQL Mode (SQLite V2.8 Subset)

For users comfortable with standard database syntax.
Supported Commands:

SELECT [fields] FROM [table] WHERE [condition] ORDER BY [field] LIMIT [n]
DELETE FROM [table] WHERE [condition]
UNDELETE FROM [table] WHERE [condition]

Aggregates: COUNT, SUM, AVG, MIN, MAX

Examples:

SELECT count(ID) FROM users WHERE active = 'Yes'
DELETE FROM logs WHERE date < '2023-01-01'
SELECT * FROM sales WHERE region = 'North' ORDER BY amount DESC

5. Technical Specifications

Max Tables Open: 5 simultaneous tables.
Max Record Size: Configurable (default 256 bytes per row).
File Handles Used:
#1 - #5: Data Tables (Runtime)
#6 - #7: Indexing (Read/Write) (uses MMBasic Structures)
#8: Utilities (Help/Schema Loader)
#9 - #10: Import Wizard (CSV processing)

6. Appendix: Help Files

Save these text files to your storage to enable the built-in help system: native.hlp, sql.hlp

db.zip

~ Edited 2026-01-19 14:58 by lizby
PicoMite, Armmite F4, SensorKits, MMBasic Hardware, Games, etc. on FOTS
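The fixed-width layout the import wizard produces can be sketched in a few lines. This is an illustrative Python model of the idea (PicoDB itself is MMBasic); the field names and widths here are hypothetical, not the generated us500 schema:

```python
# Hypothetical schema: (field_name, width) pairs. PicoDB derives these by
# scanning the CSV; it also injects a one-byte soft-delete flag (delFlag).
SCHEMA = [("first_name", 12), ("last_name", 14), ("salary", 8)]
RECORD_LEN = 1 + sum(w for _, w in SCHEMA)  # +1 for the delFlag byte

def pack_record(row):
    """Pad/truncate each field to its fixed width; prepend delFlag."""
    out = " "  # ' ' = live record; a marker like 'D' = soft-deleted
    for (_, width), value in zip(SCHEMA, row):
        out += value[:width].ljust(width)
    return out.encode("ascii")

def unpack_record(buf):
    """Slice the fixed-width buffer back into named fields."""
    text, pos, rec = buf.decode("ascii"), 1, {}
    for name, width in SCHEMA:
        rec[name] = text[pos:pos + width].rstrip()
        pos += width
    return rec

rec = pack_record(["Ada", "Lovelace", "52000"])
```

Because every row is the same size, record N sits at byte N * RECORD_LEN, so a lookup is a single seek with no parsing -- which is what makes this format attractive on a RAM-starved PicoMite.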
||||||
Now you need COMMITs and ROLLBACKs... locks... views... it is a long slippery slope
||||||
Won't be looking for anything past SQLite V2.8, and only a subset of that.
||||||
Very cool!
||||||
Many updates, including single and batch record insertions, UPDATE ... WHERE, and CALLBACK to user code. Code is now 2,261 lines long. I think this is a quite functional database manager now.

The bulk of the code can be loaded into the library, leaving only about 200 lines of stub for a main program, to which the user can add essentially anything as user code. Simple queries on indexed terms are quite fast--tens of milliseconds.

Here is the native language help text:

Fuller PDF documentation attached. If anyone is interested in doing anything with it, I'll provide the code and would be happy to work to make it suitable for real projects.

PicoDB.pdf
||||||
Arithmetic added today (UPDATE jobsort SET age = age + 1 WHERE occupation = student), and GROUP BY (COUNT occupation GROUP BY occupation). Here's a new test suite, with 46 commands, and the results of running: BATCH test.bat

This vibe coding is fun, although it took me/us (but mostly me) a number of hours to debug several problems in "UPDATE jobsort SET age = age + 1 WHERE occupation = student". Multiple logic errors. Gemini kept inventing solutions that weren't solutions, and speculating about "silent" errors. TRACE LIST provided the final clues. So there's still a use for my 57 years of coding and debugging practice.

Some of these tests didn't SHOW the data expected, but that's a debugging issue for tomorrow.

~ Edited 2026-01-23 11:32 by lizby
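The two new operations can be modeled in a few lines -- a Python sketch of the semantics, not PicoDB's MMBasic implementation; the rows here are made-up stand-ins for the jobsort table:

```python
from collections import Counter

# Toy rows standing in for the jobsort table (values are illustrative).
rows = [
    {"occupation": "student", "age": 19},
    {"occupation": "student", "age": 22},
    {"occupation": "engineer", "age": 41},
]

# COUNT occupation GROUP BY occupation: tally rows per distinct value.
counts = Counter(r["occupation"] for r in rows)

# UPDATE jobsort SET age = age + 1 WHERE occupation = student:
# evaluate the filter per row, apply the arithmetic only on matches.
for r in rows:
    if r["occupation"] == "student":
        r["age"] += 1
```

The subtlety that presumably made the UPDATE hard to debug is that the right-hand side (age + 1) must be evaluated against each matching row's own current value, not once up front.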
||||||
Where can I download the employees.csv and customers.csv files? Thank you.....Brian
||||||
I don't remember exactly where I got the Northwind CSV files--I did a bit of scratching around--but Perplexity says:
||||||
Thank you @lizby and Perplexity ....Brian
||||||
I had a 2-week gap where a physical-world construction project got in the way, but back at it yesterday and today. For now, I am content with where it is.

1. Database Setup & Management

These commands handle the creation and loading of database environments.

make-db <dbname> <csv_list>
Description: Converts one or more CSV files into the engine's binary format (.dat) and creates the schema definition (.def). Automatically injects a "Soft Delete" flag (delFlag).
Example: make-db us500 employees.csv, jobs.csv

use-db <dbname>
Description: Loads a database environment (tables and schema).
Example: use-db us500

Relational Joins (Schema Configuration)
Description: PicoDB supports dynamic, memory-efficient relational joins. To link two tables, open the generated .def file and add a LINK directive at the bottom specifying the child table and the foreign key field.
Syntax: LINK, <child_table>, <foreign_key>
Example: LINK, jobsort, id_ref

ruler <filename>
Description: Displays the first record of a file with a visual ruler. Useful for calculating fixed-width field positions manually.
Example: ruler employees.csv

HELP / ?
Description: Displays the built-in help screen containing a quick reference guide to all native commands.

EXIT
Description: Closes all files and terminates the engine.

2. Data Retrieval (Native Mode)

Syntax: [filter] [ORDER BY field [DESC]] [SHOW fields | SHOW ALL | FORMAT name] [LIMIT n]

Queries are constructed by chaining clauses. If no command is recognized, the engine treats the input as a filter.

Clauses:
Filter Condition: Standard comparison operators (=, >, <, >=, <=, <>). Strings with spaces must be quoted. Queries can span multiple joined tables seamlessly.
Like Operator: Use ~ for wildcard searches (% = multi-char, _ = single-char).
Compound Logic: Use & (AND), | (OR), and parentheses () for complex queries.
SHOW <fields>: Selects specific columns to display (comma-separated).
SHOW ALL: Dumps the raw disk buffer to the screen for lightning-fast, perfectly aligned column printing of all fields across all joined tables.
FORMAT <name>: Applies a saved view (see Tools section).
ORDER BY <field> [DESC]: Sorts the results. Default is ascending.
LIMIT <n>: Restricts the number of results returned.
TXT <file> / CSV <file>: Exports the results to a text or CSV file on the SD card.

Examples:
Simple Search: zip="90210"
Cross-Table Query: state="TX" & occupation="Engineer"
Wildcard: title ~ "Eng%" (matches Engineer, English, etc.)
Export: state="NY" CSV ny_export.csv

Fuller documentation attached.

> memory
Program:
  78K (23%) Program (2554 lines)
 250K (77%) Free
Saved Variables: 16K (100%) Free
RAM:
   0K ( 0%) 0 Variables
   4K ( 1%) General
 371K (99%) Free

> files
A:/
<DIR>  .
<DIR>  ..
00:00 01-01-2024      4  bootcount
00:14 01-01-2024    148  copyfile.bas
00:09 01-01-2024  86714  d6b5c.bas
01:09 01-01-2024  87879  d6b5d.bas
02:18 01-01-2024  87313  d7.bas
02:19 01-01-2024  22000  jobsort.dat
00:16 01-01-2024  22000  jobsort.good
02:19 01-01-2024   8000  jobsort_age.ndx
02:19 01-01-2024   8000  jobsort_id_ref.lnk
02:19 01-01-2024  20000  jobsort_occupation.ndx
02:19 01-01-2024   8000  jobsort_salary.ndx
04:05 01-01-2024   2315  native.hlp
00:50 01-01-2024     50  occupation.fmt
02:19 01-01-2024     50  person.fmt
00:21 01-01-2024    706  sql.hlp
00:46 01-01-2024   3639  test.bat
00:49 01-01-2024     45  us500-default.fmt
02:19 01-01-2024 107642  us500.dat
01:20 01-01-2024   1298  us500.def
00:14 01-01-2024 107000  us500.good
02:19 01-01-2024  20120  us500_city.ndx
02:19 01-01-2024  20120  us500_state.ndx
2 directories, 22 files, 13778944 bytes free
>

On the WeAct RP2350B module (no PSRAM yet--on order), 77% of program space is free, and there are 13 megabytes on drive A:. Code is 2554 lines. Development with Gemini is still smooth, but debugging can be complicated if it doesn't see where the problem is. 45 tests run with the BATCH command in 78 seconds (CPUSPEED 360000).

PicoDB.pdf PicoDB.zip
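The ~ wildcard rules above (% = multi-char, _ = single-char, case-insensitive) can be sketched as a translation to a regular expression. Python here, since the engine itself is MMBasic; this is my reading of the documented semantics, not PicoDB's actual matcher:

```python
import re

def like_to_regex(pattern):
    """Translate SQL-style LIKE wildcards to a compiled regex:
    % -> .* (any run of characters), _ -> . (exactly one character).
    All other characters are escaped and matched literally;
    the whole pattern is anchored, and matching is case-insensitive."""
    out = ""
    for ch in pattern:
        if ch == "%":
            out += ".*"
        elif ch == "_":
            out += "."
        else:
            out += re.escape(ch)
    return re.compile("^" + out + "$", re.IGNORECASE)

rx = like_to_regex("Eng%")
```

With this reading, title ~ "Eng%" matches "Engineer" and "English" (and "engineer", via the case-insensitive flag) but not "Civil Engineer", because the pattern is anchored at the start.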
||||||
The latest library.bas for PicoDB, with the db .hlp and .doc files and a user-code stub. See PicoRR for a "Vibe Virtual Model Railroad" example.

PicoDB_lib.zip
||||||
A number of bug fixes and enhancements. Tests in the test.bat batch file runner extended from 46 to 56.

db.zip

library.bas is the library file.
db.bas is the console file for querying the database.
From the console prompt, "batch test.bat" will run the test suite.
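The batch runner pattern is simple enough to sketch: read commands from a file, skip blank and "#" comment lines, and hand each line to the engine's dispatcher. An illustrative Python model (run_batch and the recording executor are hypothetical names, not PicoDB code):

```python
import os
import tempfile

def run_batch(path, execute):
    """Feed each non-comment line of a batch file to an executor.
    Lines starting with '#' are comments -- handy for disabling a test."""
    with open(path) as f:
        for line in f:
            cmd = line.strip()
            if cmd and not cmd.startswith("#"):
                execute(cmd)

# Demo with a stand-in executor that just records what it was given.
ran = []
with tempfile.NamedTemporaryFile("w", suffix=".bat", delete=False) as f:
    f.write("# Test 4: OR query (commented out)\n")
    f.write("city=Abilene | city=Fresno\n")
    name = f.name
run_batch(name, ran.append)
os.unlink(name)
```

Treating "#" lines as comments is also what lets individual failing tests be switched off without deleting them from the suite.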
||||||
I guess I don't understand the setup. I downloaded db.zip, extracted it, and placed the files in a folder on my PicoMite's SD card. I changed to that directory, loaded "library.bas", and did a LIBRARY SAVE command. When I run "db.bas" I get an error:

Initializing Database Engine...
B:/PicoDB/db.bas
[LIBRARY] OPEN dbName$ + ".def" FOR Input AS #8
Error : Could not find the file
||||||
|
||||||
Thanks. Now I get a little further. It creates the database, but when I run test.bat it fails on Step 4:

BATCH> # Test 4: OR query
BATCH> city=Abilene | city=Fresno
[LIBRARY] REDIM DB_JoinResAlpha(MAX_RECS)
Error : Not enough Heap memory

I'm running PicoMite VGA 6.02 on a Raspberry Pi Pico 2.
||||||
This is really an issue that I ultimately solved only with PSRAM. What PicoMite are you running on?

The us500 database has 500 records. A few are added by test.bat. Line 124 (or thereabouts) is:

Dim INTEGER MAX_RECS = 600

You might be able to reduce that to, say, 510 and succeed.
||||||
I'm running PicoMite VGA on a Raspberry Pi Pico 2, so I don't have any PSRAM.
||||||
Mark--I hadn't used a VGA version for a while--it's been running fine on Mick's PCB for the WeAct RP2350B 48-pin board with an Adafruit DVI module and 8MB PSRAM. I loaded it onto Mick's PicoStick VGA board with a Pico2, and got the same error you did.

I've edited the test.bat file to comment out the 2 tests which fail on boards with no PSRAM: #4 and #49. If you find that any other tests fail, please let me know, comment them out with "#", and try again.

The library has slight changes, so do LIBRARY DELETE and LIBRARY SAVE. I've added a file which should have been included before: us500-default.fmt, which provides a default format for the output.

db.zip
||||||
The Back Shed's forum code is written, and hosted, in Australia.