By starting the hash table at this size, the minimum object size (on
a 32-bit machine) is 89 * 2 * 4 bytes = 712 bytes (plus some overhead for headers, etc.). This is an immense waste for objects with few properties.
I didn't intend this hash map to be used for object representations. I initially coded it so I could use it for the CFG implementation and further code along the way. It's kind of limiting to only have arrays, or JS "hash maps" that can only take integers and strings as indices. I thought a proper hash map (and hash set) would be useful down the road.
Table contraction should also be implemented in the remItem method.
For this, it is necessary to specify both a minimum and a maximum load factor.
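As a rough illustration only (minLoadFactor, findSlot, clearSlot, resize and INIT_SIZE are all made-up names here, not the actual API), contraction in remItem could look something like this:

    // Sketch: shrink the table when a removal drops the load factor
    // below a hypothetical minimum.
    HashMap.prototype.remItem = function (key)
    {
        var idx = this.findSlot(key);          // assumed lookup helper
        if (idx === -1)
            return;                            // key not present, nothing to do

        this.clearSlot(idx);                   // assumed slot-clearing helper
        this.numItems -= 1;

        // Contract when below the minimum load factor, but never
        // below the initial table size.
        if (this.numItems / this.numSlots < this.minLoadFactor &&
            this.numSlots > HashMap.INIT_SIZE)
        {
            this.resize(Math.max(HashMap.INIT_SIZE, this.numSlots >> 1));
        }
    };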
Might be useful later, for long-lived hash maps.
Growing the table by a factor of 2 is too much. When a load factor
range is defined, it is possible to compute the new size so that the resulting load factor falls in the middle of that range (more precisely, at the geometric mean of the minimum and maximum load factors, i.e. the square root of their product). That way, it will take a substantial number of key inserts/deletes before the table needs to be resized again.
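For example (with made-up minLoad/maxLoad parameters), the computation being suggested would be roughly:

    // Sketch: pick the slot count so the post-resize load factor lands at
    // sqrt(minLoad * maxLoad), the geometric mean of the two bounds.
    function newTableSize(numItems, minLoad, maxLoad)
    {
        var targetLoad = Math.sqrt(minLoad * maxLoad);
        return Math.ceil(numItems / targetLoad);
    }

    // With minLoad = 0.25 and maxLoad = 0.75, targetLoad is about 0.43,
    // so a table holding 100 items would get roughly 231 slots.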
I think doubling the size is pretty standard; it helps minimize the number of expansions needed when a table is growing rapidly. The expansion formula is also intended to keep the table size close to a prime. However, I would indeed need a minimum load factor if I want compaction.
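One common way to get roughly-doubled sizes that tend to stay close to prime is a 2n + 1 recurrence; this is only a guess at the kind of scheme meant here, not the actual formula:

    // Assumption: a common doubling scheme, not necessarily the one used here.
    // Starting from 89 and repeatedly applying 2n + 1 gives
    // 179, 359, 719, 1439, ... which are all prime.
    function nextTableSize(numSlots)
    {
        return 2 * numSlots + 1;
    }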
The table should only expand when a key is actually added and the
maximum load factor is exceeded. Currently, if a key is already in the table and the load is at the maximum, the table is resized anyway.
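In other words, the suggestion is to probe for the key first and only consider resizing when it turns out to be new; a sketch (addItem, findSlot, setSlotVal and insertNewSlot are assumed names, not the real API):

    // Sketch: never resize on an update of an existing key.
    HashMap.prototype.addItem = function (key, val)
    {
        var idx = this.findSlot(key);          // assumed lookup helper
        if (idx !== -1)
        {
            this.setSlotVal(idx, val);         // existing key: update in place
            return;
        }

        // New key: grow only if inserting it would push the load factor
        // past the maximum (the expansion formula here is a placeholder).
        if ((this.numItems + 1) / this.numSlots > this.maxLoadFactor)
            this.resize(2 * this.numSlots + 1);

        this.insertNewSlot(key, val);          // assumed insertion helper
        this.numItems += 1;
    };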
I'm not sure I get what you mean. The table is resized if adding a new key would make the load factor go past the maximum. I'm working under the assumption that keys are only added once.
- Maxime