Note that I wrote on 2007/11/02:
I'd now rather invest more time in getting things right (the way I want them) than create a complete framework quickly.
Joe Hosteny wrote:
I wanted to put a feeler out to see if anyone in the Scheme community was working on these things, or would like to.
Re: was working: yes, in the sense of having put thought into it (pondered design choices) and done experiments: creating records that can be extended, creating shared low-level hash tables in Scheme versus C code, and starting a POSIX interface for the purpose of using mmap, setuid, etc.
Re: would like to: yes, although there is also module system stuff I'd like to sort out first, and I'm busy with non-Scheme work at the moment.
I haven't really dug into a lot of examples with Mnesia, but my understanding is that these are some of the main features.
- Embedded
Yes, not some binding to some SQL database... (Although I'm not really in the "in-memory database" camp, and there's no issue with running the database in different processes than the processes making up your applications. Whether "process" means Unix (or, huh, that other OS) processes or just Gambit threads is open; although once you start using POSIX functionality directly, Gambit threads are not such a good fit for it anymore. Not that that's a bad thing: doing the split into real multiprocessing at the storage boundary is probably a pretty good decision.)
- Allows for replication
Sure, and not only unidirectionally like most DB systems, but also handling split-brain situations that have happened to your servers. That means offering merging capabilities (maybe coded by users so that merging is appropriate for their data structures; a sketch follows after the next point).
or distribution of tables
Yes, have data in different places (processes with their own files) owned by different users (Unix users, or, in the case of Erlang, maybe it's just processes--they have that "must know the uuid of a process to talk to it" kind of security).
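
To make the merging point concrete, here is a minimal sketch, with made-up names and records simplified to association lists, of what a user-supplied merge procedure for one record type could look like after a split-brain, given the last common version and the two diverged ones:

(define (field record name)
  (cond ((assq name record) => cdr)
        (else #f)))

(define (union a b)     ; set union on lists, keeping the order of a
  (cond ((null? b) a)
        ((member (car b) a) (union a (cdr b)))
        (else (union (append a (list (car b))) (cdr b)))))

;; ancestor: last common version; a, b: the two diverged versions
(define (merge-customer ancestor a b)
  (list
   ;; scalar field: take whichever side changed it (prefer a on conflict)
   (cons 'name (if (equal? (field a 'name) (field ancestor 'name))
                   (field b 'name)
                   (field a 'name)))
   ;; set-valued field: union both sides
   (cons 'tags (union (field a 'tags) (field b 'tags)))))

(merge-customer '((name . "Joe")    (tags . ("scheme")))
                '((name . "Joe H.") (tags . ("scheme")))
                '((name . "Joe")    (tags . ("scheme" "erlang"))))
;; => ((name . "Joe H.") (tags . ("scheme" "erlang")))

The database would only have to call such a procedure for records that were changed on both sides; everything else merges trivially.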
- Transactions (I'm not sure about the full capabilities of these)
Of course. I intend to handle the data purely functionally, meaning no hash tables (except for caching purposes), only trees. Database storage files are append-only; stale data is pruned by garbage collection passes. Thanks to handling purely functional data, the GC can run in constant memory (in the sense that a couple of tens of MB are enough for garbage collecting hundreds of gigabytes), and thanks to fast linear disk access it's efficient.
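
To illustrate why purely functional trees make this work (just a sketch, an unbalanced binary search tree for brevity; the real thing would of course be balanced): an update rebuilds only the path from the root down to the changed node and shares everything else, so committing a transaction means appending those few new nodes plus a new root.

(define (node key val left right) (vector key val left right))
(define (node-key n)   (vector-ref n 0))
(define (node-val n)   (vector-ref n 1))
(define (node-left n)  (vector-ref n 2))
(define (node-right n) (vector-ref n 3))

(define empty-tree #f)

(define (tree-set tree key val)
  (cond ((not tree) (node key val empty-tree empty-tree))
        ((< key (node-key tree))
         (node (node-key tree) (node-val tree)
               (tree-set (node-left tree) key val)   ; new left path
               (node-right tree)))                   ; right subtree shared
        ((> key (node-key tree))
         (node (node-key tree) (node-val tree)
               (node-left tree)                      ; left subtree shared
               (tree-set (node-right tree) key val)))
        (else (node key val (node-left tree) (node-right tree)))))

(define (tree-ref tree key)
  (cond ((not tree) #f)
        ((< key (node-key tree)) (tree-ref (node-left tree) key))
        ((> key (node-key tree)) (tree-ref (node-right tree) key))
        (else (node-val tree))))

;; Both versions remain accessible; readers of t1 are not disturbed.
(define t1 (tree-set (tree-set empty-tree 1 'a) 2 'b))
(define t2 (tree-set t1 2 'c))
(tree-ref t1 2)  ; => b
(tree-ref t2 2)  ; => c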
- Native data structure storage
Sure. Well, of course the bytes of Scheme objects are not stored 1:1 in the database, since pointers have to be turned into object id's, strings can be stored more compactly (and on top of that you want to append checksums and maybe compress blocks of the database), etc. So there's a transformation layer involved, but it's pretty much straightforward.
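
Roughly the kind of transformation meant, as a toy sketch with invented names and in-memory lists standing in for the storage file: compound objects get numeric object id's, containment becomes a reference to an id, and since the id map is keyed on object identity, sharing is preserved and each object is written only once.

(define oids '())        ; association list object -> id, compared by eq?
(define next-oid 0)
(define store '())       ; list of (id . flattened-representation)

(define (allocate-oid! obj)
  (set! next-oid (+ next-oid 1))
  (set! oids (cons (cons obj next-oid) oids))
  next-oid)

(define (flatten obj)
  ;; atoms are kept inline; pairs and vectors become (ref <id>)
  (if (or (pair? obj) (vector? obj))
      (list 'ref (store-object! obj))
      obj))

(define (store-object! obj)
  (cond ((assq obj oids) => cdr)          ; already stored: reuse its id
        (else
         (let ((id (allocate-oid! obj)))
           (set! store
                 (cons (cons id
                             (cond ((pair? obj)
                                    (cons (flatten (car obj))
                                          (flatten (cdr obj))))
                                   ((vector? obj)
                                    (vector-map flatten obj))
                                   (else obj)))     ; atoms stored as-is
                       store))
           id))))

;; example: the shared tail gets a single id, referenced twice
(define tail (list 1 2))
(store-object! (cons tail tail))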
- Live backups
Sure, through replication, or just by copying the storage files (even while they are still being appended to). (Nothing against Linux LVM.)
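
Why plain copying works, as a sketch using R7RS binary ports and a hypothetical committed-length argument (a real storage manager would read that from the commit header): everything before the last committed position of an append-only file is immutable, so copying just that prefix gives a consistent snapshot even while the writer keeps appending.

(define (backup-storage-file path backup-path committed-length)
  (let ((in  (open-binary-input-file path))
        (out (open-binary-output-file backup-path)))
    ;; copy exactly the committed prefix, byte by byte (a sketch;
    ;; a real implementation would copy in larger chunks)
    (let loop ((n committed-length))
      (if (> n 0)
          (let ((b (read-u8 in)))
            (if (not (eof-object? b))
                (begin (write-u8 b out)
                       (loop (- n 1)))))))
    (close-port in)
    (close-port out)))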
- Live schema upgrades
Depends on what you mean by "schema". You say you want to store native objects. Since Scheme is dynamically typed (and the database should thus be as well), you can always modify a type and continue saving it into the same tree ("table") that already holds objects of the old type. This means you've got the old "schema" (or type) for old objects and the new one for new objects. (Of course your code will have to be able to handle both, unless the code is bound to the data by means of closures(*).) Now of course you could hook a cleanup function into the database garbage collector so that, upon the next collection, old objects are converted to new ones (or you initiate an immediate collection). One could automatically create such conversion functions (yes: you'll want to write the record definition facility so that it can do that); a sketch follows below the footnote.
(*) I'm not sure: do we want to be able to save closures and continuations into the DB? (I mean, not by using the Gambit serializer, but by inspecting them and saving every contained object as a database object, so that data sharing is preserved and the same object can be referenced by a closure or by some other means. I'm not sure where mutation becomes a problem; you could give mutated objects new object id's in the database, but then they are not eq? anymore and subsequent modifications do not affect the original one. Is there a way to solve that from the purely functional language world?)
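
Back to the schema upgrades: a sketch of the version-tag-plus-conversion-function idea, with an invented record layout (just tagged lists); the conversion function is what the record definition facility could generate and the GC pass could apply.

;; version 1: (person 1 name)        version 2: (person 2 name email)
(define (person-v1->v2 old)
  (list 'person 2 (list-ref old 2) ""))   ; new field gets a default

(define (upgrade-person rec)
  (case (list-ref rec 1)                  ; dispatch on the version tag
    ((1) (person-v1->v2 rec))
    ((2) rec)
    (else (error "unknown person version:" rec))))

;; what a GC hook could do with every object it copies:
(define (gc-convert obj)
  (if (and (pair? obj) (eq? (car obj) 'person))
      (upgrade-person obj)
      obj))

(gc-convert '(person 1 "Joe"))   ; => (person 2 "Joe" "")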
I don't really have a background in databases, but I do in filesystems and journaling.
My background in databases is using MySQL and Postgres, storing objects as XML files, dumping objects as serialized blobs, etc.; maybe knowing Git also counts as some kind of database knowledge (ok, I've thought through whether I'd want SHA-1 sums as object id's and chose "no"). My background in journaling, or rather replication, is trying to manage synchronized live web servers. I'm a programmer, not a "database" guru. I know that "databases" don't fit my programming projects :)
BTW, re journaling: I'm not sure how much of that is just moot once you have purely functional storage (ok, when you're writing to several places (different processes) and grouping those changes together into single commits, then you need a journal of those; but then that's a distributed computing problem and not a storage manager problem ;)).
I have to get chjmodule working...
Christian.