Forum Replies Created

Viewing 15 posts - 16 through 30 (of 37 total)
  • PeterH
    Participant
    Post count: 41
    in reply to: fork() #1738

global_address = local_address xor shift(turf_id) sounds like a kind of lightweight segmentation; shades of the old 8086 segmentation, if it had had a virtual memory mapper. Though here the global address space maps 1:1 to every local space, and every local space maps to every other, though a use for the latter isn’t obvious. An obvious implication is that when allocating global space to a turf, some strategy must be used to avoid a collision with a forked child turf. In regard to David’s question, local space in a turf need not be contiguous, though it must not be allocated mindlessly.
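A minimal sketch of that xor mapping (the types and the 48-bit shift amount are my assumptions for illustration, not anything stated about the Mill):

```c
#include <stdint.h>

/* Hypothetical local<->global translation per the formula above.
 * The shift amount (48) is an assumption chosen for illustration. */
uint64_t to_global(uint64_t local_addr, uint64_t turf_id) {
    return local_addr ^ (turf_id << 48);
}

uint64_t to_local(uint64_t global_addr, uint64_t turf_id) {
    /* xor is its own inverse, so the same operation maps back */
    return global_addr ^ (turf_id << 48);
}
```

One nice property of the xor form is that translation in either direction is the same single-cycle operation.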

I’m thinking that if you had code like

mystruct *foo = malloc(…);

malloc() would return a local pointer, which the process would convert to a global pointer when needed. Is this handled by the memory I/O operations?

    On the other hand, a service returning a pointer might return a global pointer, such as for a file structure open for both parent and child.

  • PeterH
    Participant
    Post count: 41

The low power consumption relative to performance projected for the Mill makes it a natural for mobile devices. As for a new SoC every year, I doubt Qualcomm does a complete new design each time; nothing close to the Mill’s initial development effort.

  • PeterH
    Participant
    Post count: 41

Looks like bounded pointers would frequently miss the relatively common array access one past the end. But they would seem likely to trigger if someone attempted to exploit a bug similar to Heartbleed, especially if the attacker got greedy and tried to grab a larger amount of data.
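For reference, this is the legal C idiom in question: forming, but never dereferencing, a pointer one past the end of an array. Any bounds scheme that must permit this idiom can let an off-by-one read slip past (the function here is my own illustration):

```c
#include <stddef.h>

/* Standard C iteration idiom: arr + n is a valid pointer value
 * even though it must never be dereferenced. */
int sum(const int *arr, size_t n) {
    int total = 0;
    for (const int *p = arr; p != arr + n; ++p)
        total += *p;   /* an off-by-one loop bound here would read arr[n] */
    return total;
}
```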

  • PeterH
    Participant
    Post count: 41

I don’t see the Mill as being inherently much more (or any less) secure than established “large” machine architectures, but as offering low-cost access to methods, such as microkernels, that are high cost on existing architectures. The operating system is still in charge of applying the hardware mechanisms, and buggy system utilities may still leave you wide open.

  • PeterH
    Participant
    Post count: 41
    in reply to: Security #879

Can the handler called through a portal easily identify who called it? Suppose a service is being called by many threads in different turfs, say a file-reading service. You don’t want just any thread to access just any of the managed resources, and the caller can’t be trusted to identify itself via simple passed parameters.

  • PeterH
    Participant
    Post count: 41

While Mill operations are polymorphic at issue, they are not so at retire: the latency of (most) operations varies with the actual widths of the operands.

Which means the specializer needs to know the operand sizes; I had been thinking otherwise. That simplifies a consideration for division.

A security consideration: when calling a service, what prevents a parameter of the wrong size being passed and dropping a value at the wrong location on the belt? Is an exception thrown if operand sizes mismatch?

  • PeterH
    Participant
    Post count: 41
    in reply to: Memory #1659

Regarding context switches, how much of the permissions buffer is cached? Given that it operates in parallel with the L1 cache, I’m thinking the permissions cache would be about the same size. What I’m unclear on is how much of the total that represents, and how often a portal call will need to load new permission-table entries. Then again, this could be handled along with the call prefetch.

Regarding SIMD, I recently read a report on some benchmarking with a test case resembling

int a, b, c, d;

loop_many_times
{
    a++; b++; d++;
}

that ran slower than a test case incrementing all 4 variables; the all-4 case could use x86-family SIMD. Applied to the Mill, I can see this case being implemented by loading all of a through d, then applying a null mask to the variable not being altered.
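A scalar sketch of that masked approach (names and layout are mine; a vectorizing compiler, or wide Mill operations, would do this as a single vector add):

```c
/* Increment a, b, d but not c by adding a mask vector:
 * the lane holding c gets a zero ("null") increment,
 * so all four lanes go through the same add. */
void inc_abd(int v[4]) {              /* v = {a, b, c, d} */
    const int mask[4] = {1, 1, 0, 1}; /* null mask on c */
    for (int i = 0; i < 4; ++i)
        v[i] += mask[i];
}
```

The point is that the masked form has no branches and touches all lanes uniformly, which is exactly the shape SIMD hardware wants.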

  • PeterH
    Participant
    Post count: 41
    in reply to: The Belt #1602

“Is there anything to prevent the Bad Guy from creating a fake additional function entry point which launches a MUL and then simply *jump*s to the initial EBB of the target function, with the MUL still in flight?”

If the target function is a properly set-up service, I understand Mill security would only allow it to be called through a portal as a subroutine, not jumped to. If the function could be jumped to, the code would still run in the attacker’s security domain, unable to access anything the attacker couldn’t reach by normal means. A program messing up data in its own security domain can’t realistically be prevented by the operating system and hardware.

  • PeterH
    Participant
    Post count: 41
    in reply to: Pipelining #1238

Given that the Mill will be capable of operating on vectors, I’d expect an RGB pixel to be represented by a single vector, at least on large enough family members. I’m not clear on what operations might be available for vector shift and element access, which might be useful for filtering a stream of data.

  • PeterH
    Participant
    Post count: 41

For most services, I see work that can’t be done in the context of the requesting call being done in interrupt-driven threads. One factor: a thread from process A may have permission to perform the service for A, but not for process B, while running service code.

Accounting time in a service to anyone other than the nominal owner of a thread is hard if you don’t trust the service on the matter. In the general case, one process (the system scheduler here) knows little of how another works.

  • PeterH
    Participant
    Post count: 41

    When I think of an OS on the Mill, I think of turfs being associated with processes, services, and libraries.
    A process has at least one persistent thread (which may be suspended for multitasking).
    A service has threads launched by hardware interrupts, doing a quick job, then exiting.
    A library has no threads associated with itself.
Libraries will export portal calls into themselves; services and processes may export portal calls.
When a process thread makes a portal call into a library/service, the thread remains associated with the process for general accounting purposes. Separate accounting for top and for profilers of processes/turfs would make sense.

I’m thinking a kill aimed at a process should generally not immediately kill a thread currently running in another turf, as that would tend to corrupt shared data structures in the library/service. The thread could instead continue until it returned to its home turf, though that presents another level of complexity.

  • PeterH
    Participant
    Post count: 41
    in reply to: The Belt #1008

KSRM, I believe the way the operation would usually be done on a Mill is: first an unconditional increment of a, then a conditional select, the select being a “zero-time” operation. The belt evolves as:

(a b)
(a+1 a b)
(a a+1 a b) or (a+1 a+1 a b)

A given operation always adds the same count of results to the front of the belt. If you use branching, you need to make sure the belt is set up consistently before your branches merge.
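In C terms, the sequence above is just a branch-free conditional increment (my sketch, not Mill assembly):

```c
/* Branch-free conditional increment: compute a+1 unconditionally,
 * then select between the old and new value, mirroring the belt
 * pictures above where both a and a+1 are live when the select runs. */
int cond_inc(int a, int cond) {
    int a1 = a + 1;          /* unconditional add */
    return cond ? a1 : a;    /* the "zero-time" select picks one */
}
```

Either way the select produces exactly one result, so the belt contents are the same shape on both paths.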

  • PeterH
    Participant
    Post count: 41
    in reply to: Security #997

An RNG need not be long latency; it depends on the requirements, and the requirements for cryptography vs. a Monte Carlo simulation are different. For cryptography you ideally want a generator that can’t be predicted even knowing the last N numbers produced; if that takes 300 cycles per result, so be it, since you aren’t asking for that many random numbers. A Monte Carlo simulation, on the other hand, can accept less random results, but likes them fast.

A hardware LFSR-based generator should be faster than an adder, but is completely unsuitable for cryptography, being far too predictable. A software LFSR inlined in the same code that consumes the numbers, I’d estimate at one vector of results per cycle on the Mill.
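For concreteness, a software Galois LFSR of the sort I mean (the tap mask is a standard maximal-length choice, used here only as an example; again, trivially predictable and unsuitable for cryptography):

```c
#include <stdint.h>

/* One 64-bit Galois LFSR step: a shift, a mask, and an xor per
 * output. Cheap enough to vectorize across lanes, but anyone who
 * sees one output can predict every later one. */
uint64_t lfsr_next(uint64_t s) {
    return (s >> 1) ^ (-(s & 1ULL) & 0xD800000000000000ULL);
}
```

The `-(s & 1ULL)` trick expands the low bit into an all-ones or all-zeros mask, so the taps are applied without a branch.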

Reading from a bank of asynchronous oscillators is fairly fast, and pretty good if the sampling is slow compared to the oscillator rate. But it takes power to run the hardware, so you choose between high power consumption, slow sampling, or weak randomness. Combined with another independent method, it can yield top-grade randomness.

  • PeterH
    Participant
    Post count: 41
    in reply to: Security #916

Allowing that the OS can give threads sharing a security domain common local memory in the service turf, putting service state in thread-local memory should work beautifully. A handle may be a pointer, and any thread that can access the appropriate memory in the service turf can then use the handle. Nice and fast.

And since an attempted read of forbidden memory produces metadata state, the service can check whether a handle is valid at very low cost.

  • PeterH
    Participant
    Post count: 41
    in reply to: Security #876

First-class hardware random number generators aren’t difficult; the old Atari systems, c. 1980, had them. But I don’t see them as a core feature of CPU hardware. An opcode in the generic code representation wouldn’t hurt, with an option to implement the generator as a specialty register.
