[gambit-list] Your feedback would be much appreciated re: Proposal for enabling IO errors to be reported by returning a custom value instead of by throwing an exception, through a DSL with exports: ##io-error-behavior param, ##default-io-error-behavior unique value, ##last-io-error param.

Mikael mikael.rcv at gmail.com
Sun Mar 10 22:47:20 EDT 2013


Dimitris,

More re f):

Wait - since the respective port does all the decoding logic internally
anyhow, the fact that Gambit signals IO events internally using mutex-based
condvars should *not* limit the IO speed that is achievable: the port could
do read-ahead at the u8vector level, and the thread-safety/condvar machinery
could operate at that level only.

Then the individual read-u8 calls could run without any thread-safety
overhead, with no need for condvars at all.

At least this understanding makes it clear that, independently of how
Gambit's IO works internally, read-u8 and friends *can* be invoked with no
per-invocation locking at all; locking would only be needed once per block,
i.e. once per 10240 calls or so - very fair!

Though, it would only be a 'complete' solution if it could be done in a way
that does not interfere with hybrid binary & character access - that would be
the only thing such a read-ahead mechanism could interfere with, wouldn't it?
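
Just to make the idea concrete, a rough sketch - the names (make-buffered-reader
and the closure it returns) are made up, and the real mechanism would of course
live inside the port rather than in user code; the only locked operation here is
the read-subu8vector call that refills the block:

(define block-size 10240)

;; Sketch: fill a u8vector block via the port's ordinary (locked)
;; read-subu8vector, then serve individual bytes from the block
;; without touching any mutex or condvar.
(define (make-buffered-reader port)
  (let ((buf (make-u8vector block-size))
        (pos 0)
        (end 0))
    (lambda ()
      (if (< pos end)
          (let ((b (u8vector-ref buf pos)))   ; fast path, no locking
            (set! pos (+ pos 1))
            b)
          ;; one locked call per block of up to 10240 bytes
          (let ((n (read-subu8vector buf 0 block-size port)))
            (if (= n 0)
                #!eof
                (begin
                  (set! pos 1)
                  (set! end n)
                  (u8vector-ref buf 0))))))))

So each block costs one locked call, and all the other byte reads within it
cost nothing in terms of synchronization.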

If you have any thoughts, feel free to share.


Anyhow, for me the priorities right now are what's addressed by the parent
email in this thread and points a) & b), as these are the things essential for
driving real-world HTTP, HTTPS and the like reliably and cleanly.
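
Roughly, the intended use of the proposed exports would be something like the
following (sketch only - the calling convention is still open, port,
handle-io-error and process-byte are just placeholders, and none of this
exists in Gambit today):

(define io-failed (list 'io-failed))               ; app-chosen sentinel value

(parameterize ((##io-error-behavior io-failed))    ; rather than ##default-io-error-behavior
  (let ((b (read-u8 port)))
    (if (eq? b io-failed)
        (handle-io-error (##last-io-error))        ; error object available, no exception raised
        (process-byte b))))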

The next thing after this would probably be c) & d), to make the IO system
abstract enough for these kinds of uses. (As in, the ability to implement
protocol handlers for SSL etc. as Gambit ports in a way that performs well.)

Brgds

2013/3/11 Mikael

> Hi Dimitris,
>
> Thank you for your response.
>
> Feel free to send it to the Gambit ML too, so as to encourage further
> conversation on this topic.
>
>
> IO performance can be boosted to very good levels by doing all IO via
> read/write-subu8vector only, but that has big system-level limitations, so
> even though it can solve the problem for almost all applications, in the
> big picture it's not a solution.
>
> Re boosting byte-level access speeds, I remember Marc (I think) making an
> ML post where he inlined read-char's or read-u8's code into the user code
> and got much higher performance that way. That saves the trampoline call at
> least... It should be easy to repeat that experiment to check how it
> affects performance.
>
> Condition variables... Hmm. Do you have any clue at what granularity they
> work?
>
>
> Brgds :) Mikael
>
> 2013/3/10 Dimitris Vyzovitis <vyzo at hackzen.org>
>
>> +1
>>
>> If I may add, point (f) should be a priority. I/O performance is
>> severely hampered by the constant mutex acquisitions.
>> As a first approximation, each individual write should stop writing
>> one character at a time (with a mutex lock each time).
>> Having full control over whether a mutex is used at all is even better,
>> as it is almost always the case that a single thread is
>> reading/writing a port (the exceptions being the std ports). The problem
>> there is the way the i/o system is implemented, though, as the events
>> are tied to condition variables and these are in turn tied to mutexes to
>> work reliably.
>>
>> PS: I owe you some coroutines :)
>>
>>
>