There is a bug in the algorithm for deleting a key from a hash map (method remItem).
Imagine the scenario where the keys A, B, C are added in that order, where A and C hash to index 1 and B hashes to index 2. The array containing the elements will be:
  0   1   2   3   4
+---+---+---+---+---+
|   | A | B | C |   |
+---+---+---+---+---+
If the key A is removed, it will be replaced by "freeKey" and B and C will not move. But if C is then looked up in the hash map it will not be found, because the probe for C starts at index 1, immediately hits a free slot, and stops searching.
You really need 2 codes:
freeKey = this element has never contained a key (since the hash table was created or resized)
deletedKey = this element used to contain a key which has since been deleted.
It is possible to do away with the "deletedKey" code, but it requires (in general) rehashing the table, which is a very high price to pay.
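As an illustration, a linear-probing lookup and removal using the two markers might look like the following JavaScript sketch (the names, map layout and hash function are illustrative only, not the actual Tachyon code):

// Illustrative sketch only: not the actual Tachyon identifiers or layout.
// Assumes linear probing over parallel key/value arrays.

var freeKey    = { sentinel: 'free' };    // slot never used since creation/resize
var deletedKey = { sentinel: 'deleted' }; // slot held a key that was since removed

// Toy hash function for string keys (assumption, for illustration only)
function hashKey(key)
{
    var h = 0;
    for (var k = 0; k < key.length; k++)
        h = (h * 31 + key.charCodeAt(k)) & 0x7FFFFFFF;
    return h;
}

function makeMap(size)
{
    var keys = [], values = [];
    for (var i = 0; i < size; i++) { keys.push(freeKey); values.push(undefined); }
    return { keys: keys, values: values, numItems: 0 };
}

function getItem(map, key)
{
    var i = hashKey(key) % map.keys.length;

    // Probe until a *free* slot: deleted slots do not stop the search,
    // which is what keeps C reachable after A has been removed.
    while (map.keys[i] !== freeKey)
    {
        if (map.keys[i] === key)
            return map.values[i];
        i = (i + 1) % map.keys.length;
    }

    return undefined;
}

function remItem(map, key)
{
    var i = hashKey(key) % map.keys.length;

    while (map.keys[i] !== freeKey)
    {
        if (map.keys[i] === key)
        {
            // Mark the slot as deleted rather than free, so that later
            // keys in the same cluster remain reachable.
            map.keys[i] = deletedKey;
            map.values[i] = undefined;
            map.numItems--;
            return true;
        }
        i = (i + 1) % map.keys.length;
    }

    return false;
}

An addItem can reuse deleted slots when inserting, and the table must always keep at least one free slot so that probes terminate.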
Check the Gambit sources for the algorithms in Scheme and C, or an algorithms textbook.
Marc
Thanks for pointing this out. I made an attempt at a bug fix without resorting to a deletedKey and wrote a better unit test. It seems to work properly now.
- Maxime
On 2010-06-28, at 10:46 AM, Maxime Chevalier-Boisvert wrote:
Thanks for pointing this out. I made an attempt at a bug fix without resorting to a deletedKey and wrote a better unit test. It seems to work properly now.
Can you explain your new algorithm in simple terms? I see you don't use a "deletedKey" tag, so I don't see how it can work (i.e. I believe there is still a bug but I'm too lazy to examine your algo in detail).
Marc
Can you explain your new algorithm in simple terms? I see you don't use a "deletedKey" tag, so I don't see how it can work (i.e. I believe there is still a bug but I'm too lazy to examine your algo in detail).
It works under the assumption that there is no key for deleted items, and that clusters of stored items are always contiguous.
When an item mapping to index K in the internal array is removed, items after it in its cluster may need to be moved. More specifically, if another item in the cluster maps to index K, it needs to be moved to the position of the removed item, so that there is no "hole" in the cluster. However, this poses a problem, because moving an item in the cluster to the "left" may create a new hole at the position of the item that was just moved, which could break the lookup of other items that map to the "left" or at the position of the moved item.
The algorithm I implemented scans, starting at the position where the removed item maps, until a free slot is encountered (until the end of the cluster). It keeps track of the position of the removed item (the "hole" we just created), and, for each item until the end of the cluster, moves it into the "hole" only if its key maps to the "left" of, or at the position of the hole. If an item is moved into the "hole", the position of the "hole" is updated, and the process keeps going until the end of the cluster. When the end of the cluster is reached, the hole is marked as being free.
All the algorithm really does is move items in a cluster closer to the position where their key maps.
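A minimal sketch of this removal procedure in JavaScript (linear probing over parallel key/value arrays, with a single freeKey marker and a hashKey helper as in the earlier sketch; the code is illustrative, not the actual implementation):

function remItem(map, key)
{
    var size = map.keys.length;
    var i = hashKey(key) % size;

    // Find the slot holding the key (stop if we reach a free slot)
    while (map.keys[i] !== freeKey && map.keys[i] !== key)
        i = (i + 1) % size;

    if (map.keys[i] === freeKey)
        return false; // key not present

    var hole = i;           // the hole created by the removal
    var j = (i + 1) % size; // scan the rest of the cluster

    while (map.keys[j] !== freeKey)
    {
        var home = hashKey(map.keys[j]) % size;

        // Move the item at j into the hole only if its home position lies
        // at or "before" the hole in cyclic probe order, i.e. if leaving
        // it at j would strand it on the far side of the hole.
        var movable = (j > hole)? (home <= hole || home > j)
                                : (home <= hole && home > j);

        if (movable)
        {
            map.keys[hole]   = map.keys[j];
            map.values[hole] = map.values[j];
            hole = j; // the moved item leaves a new hole behind
        }

        j = (j + 1) % size;
    }

    // The final hole becomes a genuinely free slot
    map.keys[hole]   = freeKey;
    map.values[hole] = undefined;
    map.numItems--;

    return true;
}

The two-branch test just restates "maps at or before the hole" in a way that also works when the probe sequence wraps around the end of the array.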
My assumptions are that:
1. The algorithm can't move an item "before" the position where its key maps. Items remain at or after this position.
2. The algorithm can't make a cluster of values mapping to the same index non-contiguous.
3. If a hole was created in a cluster, then there can be no item after the hole whose key maps at the position of the hole (or before that position). Otherwise, it would have been moved into the hole.
I believe these are sufficient not to break the structure of the hash table's internal array, but feel free to tell me if my algorithm is broken.
- Maxime
It seems that some syntax errors are not detected by the parser. For example, a break statement outside of a loop/switch statement, or with an invalid label does not generate a syntax error. I would suggest that the parser should have an AST validation pass added to it, so that these kinds of errors can be detected, and adequately reported, before any transformations are applied to the AST.
Right now, I'm in a position where the parser can produce an AST that doesn't translate into valid IR. I would rather avoid having to clutter the AST->IR translation with validation and syntax error reporting code, and simply be able to assume that the AST I get as input is syntactically valid.
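A rough sketch of the kind of check such a validation pass could perform, here for break statements (the node shapes and the astChildren and reportError helpers are hypothetical, not the actual Tachyon AST API):

// Walk the AST, tracking whether we are inside a breakable construct and
// which labels are visible; flag any break that has no valid target.
function checkBreaks(node, inBreakable, labels)
{
    if (node === null || typeof node !== 'object')
        return;

    switch (node.type)
    {
        case 'break':
        if (node.label !== null && labels.indexOf(node.label) === -1)
            reportError(node, 'break with undefined label "' + node.label + '"');
        else if (node.label === null && !inBreakable)
            reportError(node, 'break outside of loop or switch');
        return;

        case 'while': case 'do_while': case 'for': case 'for_in': case 'switch':
        inBreakable = true;
        break;

        case 'labeled_stmt':
        labels = labels.concat([node.label]);
        break;

        case 'function':
        // break/continue cannot cross a function boundary
        inBreakable = false;
        labels = [];
        break;
    }

    // Recurse into the child nodes
    var children = astChildren(node);
    for (var i = 0; i < children.length; i++)
        checkBreaks(children[i], inBreakable, labels);
}

A similar check would handle continue statements and duplicate labels.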
- Maxime
On 2010-07-01, at 10:59 AM, chevalma@iro.umontreal.ca wrote:
It seems that some syntax errors are not detected by the parser. For example, a break statement outside of a loop/switch statement, or with an invalid label does not generate a syntax error. I would suggest that the parser should have an AST validation pass added to it, so that these kinds of errors can be detected, and adequately reported, before any transformations are applied to the AST.
Right now, I'm in a position where the parser can produce an AST that doesn't translate into valid IR. I would rather avoid having to clutter the AST->IR translation with validation and syntax error reporting code, and simply be able to assume that the AST I get as input is syntactically valid.
Absolutely! There are currently no checks in the parser beyond the grammatical checks. It allows things like:
1 = 2+3;
because that is allowed by the ECMAScript grammar.
The best place to add such checks is probably in the AST normalizer (ast-passes.js) because it already traverses the AST to accumulate various pieces of information.
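For the example above, a check for invalid assignment targets could look something like this (the node type names and the reportError helper are hypothetical, not the actual ast-passes.js API):

// Flag assignments whose left-hand side is not something that can be
// assigned to, e.g. the literal 1 in "1 = 2+3;".
function checkAssignTarget(node)
{
    if (node.type !== 'assign')
        return;

    var lhs = node.lhs;

    if (lhs.type !== 'ident' &&
        lhs.type !== 'prop_access' &&
        lhs.type !== 'index_access')
    {
        reportError(node, 'invalid left-hand side in assignment');
    }
}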
Who can add this task to their workload?
Marc
If it's actually part of the grammar then that's a bit tricky. Should we report those errors right after parsing, or throw an exception only if/when the erroneous code is executed? In the latter case, code to raise the error has to be generated for these erroneous statements/expressions.
- Maxime
There are other problems with the hash map implementation:
1)
// Initial hash map size
HASH_MAP_INIT_SIZE = 89;
By starting the hash table at this size, the minimal object size (on a 32 bit machine) is 89 * 2 * 4 bytes = 712 bytes (plus some overhead for headers, etc). This is an immense waste for objects with few properties. For reference, by default the Gambit hash tables are allocated with space for 5 elements. As more keys are added the hash table will grow, and as keys are deleted the hash table will shrink to maintain the load factor within a certain range. The current load factor should count deleted keys as part of the "load". That way, a table that gets filled with deleted keys will eventually be rehashed and the deleted keys will be purged from the table.
2) Table contraction should also be implemented by the remItem method. For this it is necessary to give a minimum and maximum load factor.
3) Growing the table by a factor of 2 is too much. When a load factor range is defined, it is possible to compute the new size so that the resulting load factor falls somewhere in the middle of the range (more precisely, at the geometric mean, i.e. the square root of the product of the minimum and maximum load factors). That way, it will take a substantial number of key inserts/deletes before the table needs to be resized again (see the sketch after this list).
4) The table should only expand when a key is actually added and the maximum load factor is exceeded. Currently, if a key is already in the table and the load is at the maximum, the table is resized anyway.
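As an illustration of point 3, the new size could be computed along these lines (the load factor constants and the prime rounding are example choices, not Gambit's actual values):

// Pick a new table size so that the load factor lands at the geometric
// mean of the allowed range, then round up to a prime.
var HASH_MAP_MIN_LOAD = 0.25;
var HASH_MAP_MAX_LOAD = 0.75;

function newTableSize(numItems)
{
    // Geometric mean of the min and max load factors (~0.43 here)
    var targetLoad = Math.sqrt(HASH_MAP_MIN_LOAD * HASH_MAP_MAX_LOAD);

    // Smallest size that brings the load down to the target
    var rawSize = Math.ceil(numItems / targetLoad);

    return nextPrime(rawSize);
}

function nextPrime(n)
{
    function isPrime(m)
    {
        if (m < 2) return false;
        for (var d = 2; d * d <= m; d++)
            if (m % d === 0) return false;
        return true;
    }
    while (!isPrime(n)) n++;
    return n;
}

With these example bounds, a table resized when it holds 67 items would get 157 slots, leaving room for roughly 50 more insertions before the maximum load is reached again.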
Marc
By starting the hash table at this size, the minimal object size (on a 32 bit machine) is 89 * 2 * 4 bytes = 712 bytes (plus some overhead for headers, etc). This is an immense waste for objects with few properties.
I didn't intend this hash map to be used for object representations. I initially coded it so I could use it for the CFG implementation and further code along the way. It's kind of limiting to only have arrays, or JS "hash maps" that can only take integers and strings as indices. I thought a proper hash map (and hash set) would be useful down the road.
Table contraction should also be implemented by the remItem method. For this it is necessary to give a minimum and maximum load factor.
Might be useful later, for long-lived hash maps.
Growing the table by a factor of 2 is too much. When a load factor range is defined, it is possible to compute the new size so that it falls somewhere in the middle of the load factor range (more precisely the square root of the product of the minimum and maximum load factors). That way, it will take a substantial number of key inserts/deletes before the table needs to be resized.
I think doubling the size is pretty standard; it helps minimize the number of expansions needed when a table is growing rapidly. The expansion formula is also intended to keep the table size close to a prime number. However, I would indeed need a minimum load factor if I want compaction.
The table should only expand when a key is actually added and the maximum load factor is exceeded. Currently, if a key is in the table and the load is at maximum, the table is resized anyway.
I'm not sure I get what you mean. The table is resized if adding a new key will make the load factor go past the maximum. I'm working under the assumption that each key is only added once.
- Maxime