Heh, yeah, as a point of note: if this is implemented, we'll definitely want a full status window for this upgrade. I just timed it on a ~4MB database file locally (timed tests in Python on Win32, XP, fairly high-end system), and while the basic operation of the changes could still be improved upon, assuming the upgrade process runs a VACUUM statement to clean out the excess left behind by reworking the database, I'm looking at 2.6 seconds average for a 5MB file.
Steps: RENAME, CREATE, CREATE, DROP INDEX, DROP INDEX, DROP TABLE, CREATE INDEX, CREATE INDEX, REINDEX, VACUUM.
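The sequence above can be sketched in Python's sqlite3 module. This is a minimal illustration, not the actual upgrade code: the table and index names ("messages", "messages_idx", etc.) are hypothetical stand-ins for the app's real schema, and it collapses the two CREATE / DROP INDEX / CREATE INDEX pairs down to one of each for brevity.

```python
import sqlite3

# Set up a stand-in "old format" database in memory.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE messages (id INTEGER, folder TEXT, body TEXT)")
cur.execute("CREATE INDEX messages_idx ON messages (folder)")
cur.executemany("INSERT INTO messages VALUES (?, ?, ?)",
                [(1, "news", "a"), (2, "blogs", "b")])

# RENAME: move the old table out of the way.
cur.execute("ALTER TABLE messages RENAME TO messages_old")
# CREATE: new-format table, then copy the data across.
cur.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, folder TEXT, body TEXT)")
cur.execute("INSERT INTO messages SELECT id, folder, body FROM messages_old")
# DROP INDEX / DROP TABLE: discard the old structures.
cur.execute("DROP INDEX messages_idx")
cur.execute("DROP TABLE messages_old")
# CREATE INDEX: rebuild indexes on the new table.
cur.execute("CREATE INDEX messages_folder_idx ON messages (folder)")
# REINDEX: safety pass over the new indexes.
cur.execute("REINDEX")
con.commit()
# VACUUM: reclaim the space freed by the dropped table and indexes.
# (Must run outside a transaction, hence the commit above.)
cur.execute("VACUUM")

print(cur.execute("SELECT COUNT(*) FROM messages").fetchone()[0])
```

Note the commit before VACUUM: SQLite refuses to VACUUM inside an open transaction, and Python's sqlite3 module opens one implicitly on the INSERTs.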
Raw data from my timed tests is available here: http://www.tyrannobyte.com/vienna-rss/d ... t-data.txt
The data is output as semicolon-separated values (not comma-separated, since the SQL statements themselves contain commas).
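Reading that dump back just means swapping the delimiter on a standard CSV reader. The sample row below is made up for illustration; the actual column layout is whatever the linked data file uses.

```python
import csv
import io

# Hypothetical sample row: a SQL statement (which may contain commas)
# followed by a timing, separated by ';' rather than ','.
sample = "CREATE INDEX messages_idx ON messages (folder);0.013\n"

# csv.reader handles the split; the comma inside the SQL is left alone
# because ';' is the active delimiter.
rows = list(csv.reader(io.StringIO(sample), delimiter=";"))
print(rows[0])
```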
Any ideas on how this conceptual change can be improved upon are appreciated.
- REINDEX was executed along with the changes because I'd rather be safe than sorry. Since it accounts for only about 5% of the run time, it isn't too bad of a time-consumer, but it could possibly be omitted.
- VACUUM was executed out of necessity. At least in my SQLite3 database file, there was a fair chunk of unused space just sitting in the file. Running it trimmed about 1 MB off my output file size; results may vary. Perhaps offer VACUUM as an optional step that users may elect to run.
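If VACUUM becomes optional, one way to avoid pointless runs is to check how much reclaimable space the file actually has first. This is a sketch, not settled design: `vacuum_if_worthwhile` and the `min_free_pages` threshold are names I'm making up here, but `PRAGMA freelist_count` is a real SQLite pragma that reports the number of unused pages in the file.

```python
import sqlite3

def vacuum_if_worthwhile(path, min_free_pages=100):
    """Run VACUUM only if the database has at least min_free_pages
    unused pages, so tidy files (or opted-out users) skip the cost.
    The 100-page default is an arbitrary placeholder threshold."""
    con = sqlite3.connect(path)
    try:
        # freelist_count = pages currently on SQLite's free list.
        free = con.execute("PRAGMA freelist_count").fetchone()[0]
        if free >= min_free_pages:
            con.execute("VACUUM")
            return True
        return False
    finally:
        con.close()
```

A freshly created or already-compact file reports a freelist of zero, so the helper returns False without touching it.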
Edit: Updated link.