How would threading work if I want a service as an interrupt?
Tagged: threading interrupt
Let’s say I want to read multiple files from multiple hard disks at the same time (or we could say multiple TCP connections). A simple way is to open the files one by one and read them one by one, but that is blocking and slow, since I could potentially be accessing the other drives in parallel.
How would this work? I specifically remember the Mill can use a portal call to switch turfs, or dispatch another thread (same turf), but not both. I imagine if I do something like
a = open("a.binary");
followed by
read(a, mybuffer, size)
how would it work? I imagine the service figures out when it’s my code’s turn and does both a portal call (to get into my turf) and a dispatch of my thread?

Now what if I wanted to get some kind of message that my data is ready, so I can ask for
otherdrive/b.bin
and process both files 4 or 8K at a time. Would it work much differently? My main concern is that if I’m in the middle of processing file B I shouldn’t stop and go handle file A (unless I specifically say I want that), but if I’m sleeping I should somehow be interrupted/woken up so I can start processing file A/B. Lastly, what if I actually want an interrupt because I want to handle 3 real-time services, and I know I can handle them fast enough but want to do low-priority stuff in the meantime, which can be long or short? Could I have my code interrupted by another service so I can handle things in real time, or is that not allowed unless I’m in the all turf/thread?

This is at a level above the hardware and architecture; it’s a matter of OS and language. The hardware supports user-mode hardware interrupt handlers for code (like your friendly local super-collider) that wants to work at that level, but more typically the code would use libraries like threads and asyncio packages, and would approach the problem in terms of the model those libraries expose.
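For concreteness, here is roughly what the library-level approach looks like today, as a minimal sketch using POSIX AIO. Nothing here is Mill-specific, the file names are just borrowed from your example, and error handling is mostly omitted:

/* Sketch only: overlapping reads from two drives using POSIX AIO.
 * Error handling is mostly omitted; file names are from the example above. */
#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char buf_a[8192], buf_b[8192];
    int fd_a = open("a.binary", O_RDONLY);
    int fd_b = open("otherdrive/b.bin", O_RDONLY);
    if (fd_a < 0 || fd_b < 0) return 1;

    struct aiocb cb_a, cb_b;
    memset(&cb_a, 0, sizeof cb_a);
    memset(&cb_b, 0, sizeof cb_b);
    cb_a.aio_fildes = fd_a; cb_a.aio_buf = buf_a; cb_a.aio_nbytes = sizeof buf_a;
    cb_b.aio_fildes = fd_b; cb_b.aio_buf = buf_b; cb_b.aio_nbytes = sizeof buf_b;

    aio_read(&cb_a);                     /* both reads are now in flight */
    aio_read(&cb_b);

    const struct aiocb *const list[] = { &cb_a, &cb_b };
    aio_suspend(list, 2, NULL);          /* sleep until at least one completes */

    if (aio_error(&cb_a) == 0)
        printf("a.binary: %zd bytes ready\n", aio_return(&cb_a));
    if (aio_error(&cb_b) == 0)
        printf("b.bin: %zd bytes ready\n", aio_return(&cb_b));

    close(fd_a);
    close(fd_b);
    return 0;
}

Whether the underlying syscalls are process switches or Mill portal calls is invisible at this level; the library hides it.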
I think I should clarify. For something like
read
I imagine a portal call would happen, and read would add the buffer/size/address to a list of operations to be done in the service turf, then sit in a loop yielding its thread time until the data is ready.

However, if I’m doing this async, what implementation is available for the service to notify the client that the data is ready? I can’t imagine the service would ever want to use its own thread to do work for the client. Some implementations I can think of are
1. Service thread uses a pointer to mark client data as available/% complete. Client sits in a loop checking to see if data is ready (and sleeping if nothing is ready).
2. Service creates a new thread which then uses a portal to call client async code
3. Service is able to interrupt the client, which appears to be a ‘knight move’ (changing both the thread and the turf), which you said is not possible on the Mill.

I was wondering which of these would be optimal or possible on the Mill. For sure the first is possible; the third I suspect not. As for the second option, is spawning a thread every time less than optimal? And if the app/client is poorly behaved and never returns from the portal callback, could that cause problems?
After writing this I suspect #1 is the safest and most efficient method (sketched below), but a loop like this reminds me of a spinlock, which probably isn’t expensive but isn’t my favorite way of notifying others that data is ready.
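To make #1 concrete, here is the rough shape I have in mind, in plain C11 with made-up names: the service publishes a flag and the client polls it, but naps between checks so it isn’t a hot spinlock.

/* Option 1 sketch: shared completion flag; client polls and naps.
 * All names and the struct layout here are illustrative only. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

struct io_request {
    void        *buffer;
    size_t       size;
    atomic_bool  done;   /* service stores true once the data is in buffer */
};

/* Client side: wait for the service to finish, napping while we wait. */
static void wait_for_completion(struct io_request *req) {
    const struct timespec nap = { 0, 1000000 };   /* 1 ms */
    while (!atomic_load_explicit(&req->done, memory_order_acquire))
        nanosleep(&nap, NULL);    /* sleep instead of spinning hot */
}

/* Service side: after filling req->buffer, publish completion. */
static void mark_complete(struct io_request *req) {
    atomic_store_explicit(&req->done, true, memory_order_release);
}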
I don’t imagine async IO being any different on a Mill vs. how things currently work. We don’t use interrupts to tell a userspace application when files are ready. The current model used by pretty much everyone is something like epoll: the application uses a single syscall to find out which of many pending IO operations are ready, and that syscall may optionally block, time out, or just return immediately.
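For anyone unfamiliar, the epoll model is roughly the following sketch (Linux-specific, error handling omitted, and it assumes the epoll fd was set up elsewhere):

/* epoll sketch: one call reports which of many fds are ready.
 * Assumes epfd was created with epoll_create1() and fds were added
 * with epoll_ctl() elsewhere. */
#include <sys/epoll.h>

int wait_for_io(int epfd) {
    struct epoll_event events[16];
    /* timeout: -1 = block, 0 = return immediately, N = give up after N ms */
    int n = epoll_wait(epfd, events, 16, -1);
    for (int i = 0; i < n; i++) {
        /* events[i].data says which registered operation is now ready */
    }
    return n;
}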
On the Mill, a syscall would be a portal. In the event you use the block/timeout option, you’ve already made a turf change the moment you portal’d into the “OS”. From there, it can switch to any thread in the OS turf, such as one it previously preempted. However, none of this matters to a user-space application, which can follow the same programming model it always has.
Exactly; the user app uses whatever interface its host language and/or libraries provide. Thus a blocking call might portal into the OS, which would start up the IO, attach to a condition variable, and then call the dispatcher. The thread (i.e. the user’s thread, which has OS calls on top of it) then sleeps until the CV gets notified, and then exits back to the app.
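In conventional terms, the shape of that blocking path is roughly the following; this uses a POSIX condition variable purely as a stand-in, and none of the names are actual Mill or OS APIs:

/* Blocking-call shape: start the IO, then sleep on a CV until the
 * completion path signals it. Names are illustrative, not an API. */
#include <pthread.h>
#include <stdbool.h>

struct io_wait {
    pthread_mutex_t lock;
    pthread_cond_t  cv;
    bool            done;
};

/* Runs on the user's thread after it has portal'd into the OS. */
static void blocking_read(struct io_wait *w) {
    /* ...hand the buffer to the driver and start the IO here... */
    pthread_mutex_lock(&w->lock);
    while (!w->done)
        pthread_cond_wait(&w->cv, &w->lock);   /* "call the dispatcher" / sleep */
    pthread_mutex_unlock(&w->lock);
    /* data has landed; return back to the app */
}

/* Runs on the completion path (driver / interrupt bottom half). */
static void complete_io(struct io_wait *w) {
    pthread_mutex_lock(&w->lock);
    w->done = true;
    pthread_cond_signal(&w->cv);
    pthread_mutex_unlock(&w->lock);
}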
An async IO call would portal into the async library, which would do exactly the same thing except omit the call on the dispatcher. The app can poll the CV. A driver library could register a handler function and then visit the dispatcher, and the interrupt would get handled in the app. None of these have anything to do with the ISA except the use of portals instead of process switches.
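The async flavor is the same shape minus the sleep: the library returns immediately, and the app either polls a completion flag or registers a handler for the completion path to invoke. Again just a sketch with made-up names, not a real API:

/* Async shape: the library never blocks; the app either polls the
 * completion flag or registers a handler. Names are illustrative. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef void (*io_handler)(void *buf, size_t len);

struct async_io {
    void       *buf;
    size_t      len;
    atomic_bool done;
    io_handler  on_done;   /* NULL means the app will poll instead */
};

/* App side: returns true once the data has landed in op->buf. */
static bool io_ready(struct async_io *op) {
    return atomic_load_explicit(&op->done, memory_order_acquire);
}

/* Completion path: publish the result, then run the handler if any. */
static void async_complete(struct async_io *op) {
    atomic_store_explicit(&op->done, true, memory_order_release);
    if (op->on_done)
        op->on_done(op->buf, op->len);
}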
Gotcha, thanks guys