> On 3/11/2025 12:57 PM, MitchAlsup1 wrote:
>> There is already a driver in BOOT that reads config headers for [...]
>> My whole space is mapped by BAR registers as if they were on PCIe.
>
> Not a thing yet.
>
> But, PCIe may need to exist for Linux or similar.
>
> But, may still be an issue as Linux could only use known hardware IDs,
> and it is a question what IDs it would know about (and if any happen to
> map closely enough to my existing interfaces).
>
> Otherwise, would be necessary to write custom HW drivers, which would
> add a lot more pain to all of this.

No, a manufacturer:device for every CPU-type on the die. Then all of [...]
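
A rough sketch of that kind of config-header discovery loop (the header
layout, base address, and attach_driver() hook are invented for
illustration, not the actual BOOT code):

#include <stdint.h>

/* Hypothetical config header; real PCIe headers carry more fields. */
typedef struct {
    uint16_t vendor_id;   /* manufacturer                        */
    uint16_t device_id;   /* device type, e.g. one per CPU-type  */
} cfg_header_t;

#define CFG_BASE   ((volatile cfg_header_t *)0xE0000000u)  /* assumed */
#define CFG_SLOTS  256

extern void attach_driver(uint16_t vendor, uint16_t device);

/* Boot-time scan: read each header, skip empty slots, and hand
 * known vendor:device pairs to the driver table. */
static void scan_config_space(void)
{
    for (int slot = 0; slot < CFG_SLOTS; slot++) {
        volatile cfg_header_t *hdr = &CFG_BASE[slot];
        if (hdr->vendor_id == 0xFFFF)   /* empty-slot convention */
            continue;
        attach_driver(hdr->vendor_id, hdr->device_id);
    }
}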

>>> Some read-only CSRs were mapped over to CPUID.
>>
>> I don't even have a CPUID--if you want this you go to config space
>> and read the configuration lists and extended configuration lists.
>
> Errm, so vendor/Hardware IDs for each feature flag...
>
> 30 and 31 give the microsecond timer and HW-RNG, which are more
> relevant to user-land.
> 32..63: Currently unused.

The timer running in virtual time or the one running in physical time ??
By placing the timers in MMI/O memory address space*, accesses from [...]

> There is also a cycle counter (along vaguely similar lines to x86
> RDTSC), but for many uses a microsecond counter is more useful (where
> the timer-tick count updates at 1.0 MHz, and all cores would have the
> same epoch).
>
> On x86, trying to use RDTSC as a timer is rather annoying as it may
> jump around and goes at a different rate depending on current clock
> speed.
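
A minimal sketch of what such a counter looks like from user code,
assuming a memory-mapped 64-bit register (the address is invented):

#include <stdint.h>

#define USEC_TIMER (*(volatile uint64_t *)0xF0001000u)  /* assumed */

/* At a 1.0 MHz tick, one count is exactly 1 us, and a shared epoch
 * makes deltas comparable across cores. */
uint64_t time_work(void (*work)(void))
{
    uint64_t t0 = USEC_TIMER;
    work();
    return USEC_TIMER - t0;   /* unsigned math handles wraparound */
}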

> This scheme will not roll over for around 400k years (for a 64-bit
> microsecond timer), so "good enough".

So at 1GHz the roll over time is 400 years. Looks good enough to me.
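
(For reference, the exact horizons come out somewhat higher; a quick
check, with the year length as the only assumption:)

#include <stdio.h>

/* A 64-bit counter wraps after 2^64 ticks, so the rollover horizon
 * depends only on the tick rate. */
int main(void)
{
    const double SEC_PER_YEAR = 365.25 * 24 * 3600;  /* ~3.156e7 s */
    const double TICKS = 18446744073709551616.0;     /* 2^64 */
    printf("1 MHz: %.0f years\n", TICKS / 1e6 / SEC_PER_YEAR); /* 584542 */
    printf("1 GHz: %.0f years\n", TICKS / 1e9 / SEC_PER_YEAR); /* 585 */
    return 0;
}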

> Conceptually, this time would be in UTC, likely with time-zones handled
> by adding another bias value.

What is UTC time when standing on the north or south poles ??

> This can in turn be used to derive the output from "clock()" and
> similar.

We used to run a benchmark 1,000,000 times in order to get accurate [...]

> Also, there are relatively few software timing tasks where we have much
> reason to care about nanoseconds. For many tasks, milliseconds are
> sufficient, but there are some things where microseconds matter.
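
One way the "clock()" derivation mentioned above could look, reusing
the invented counter address (a sketch, not the actual runtime code):

#include <stdint.h>
#include <time.h>    /* clock_t, CLOCKS_PER_SEC */

#define USEC_TIMER (*(volatile uint64_t *)0xF0001000u)  /* assumed */

static uint64_t start_usec;  /* captured once at process startup */

/* Elapsed microseconds rescaled to CLOCKS_PER_SEC ticks; the
 * multiply-first overflow after ~200 days is ignored in this sketch. */
clock_t my_clock(void)
{
    uint64_t dt = USEC_TIMER - start_usec;
    return (clock_t)(dt * CLOCKS_PER_SEC / 1000000u);
}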

>>> Of which, all of the CPUID indices were also mapped into CSR space.
>>
>> x86 uses a narrow bus that runs around the whole chip so it can
>> access [...]
>> CPUID is soooooo pre-PCIe.
>
> Dunno.
>
> Mine is different from x86, in that it mostly functions like read-only
> registers.

> RISC-V land seemingly exposes a microsecond timer via MMIO instead, but
> this is much less useful, as it means needing to use a syscall to fetch
> the current time, which is slow.

Or a generous MMU handler that lets some trusted low privilege level [...]
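
The "generous MMU handler" idea, sketched with POSIX mmap() for
concreteness (the device path and physical address are assumptions;
in practice the kernel or a vDSO would provide the mapping):

#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

/* Map the page holding the counter read-only into user space, so
 * reading the time becomes a plain load instead of a syscall. */
volatile uint64_t *map_usec_timer(void)
{
    int fd = open("/dev/mem", O_RDONLY);   /* privileged */
    if (fd < 0)
        return 0;
    void *p = mmap(0, 4096, PROT_READ, MAP_SHARED,
                   fd, 0xF0001000u);       /* page-aligned offset */
    close(fd);                             /* mapping stays valid */
    return (p == MAP_FAILED) ? 0 : (volatile uint64_t *)p;
}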

> Doom manages to fetch the current time frequently enough that doing so
> via a syscall has a visible effect on performance.

I had an old Timex a long time ago that I had to adjust the time [...]

You are not trying to access 1,000 AHCI disks on a single rack, either;

>> My 66000 does not even have a 32-bit space to map into.
>> You can synthesize such a space by not using any of the
>> top 32-address bits in PTEs--but why ??
>
> 32-bit space is just the first 4GB of physical space.
> But, as-is, there is pretty much nothing outside of the first 4GB.
>
> The MMIO space actually in use is also still 28 bits.
>
> The VRAM maps 128K in MMIO space, but in retrospect probably should
> have been more. When I designed it, I didn't figure there would have
> been more than 128K. The RAM-backed framebuffer can be bigger though,
> but not too much bigger, as then screen refresh starts getting too
> glitchy (as it competes with the CPU for the L2 cache, but is more
> timing sensitive).

One would think the very minimum to be 32-bit color (8,8,8,8) [...]
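
(Quick arithmetic on why 128K gets tight once 32-bit (8,8,8,8) color
enters the picture; the resolutions are just illustrative:)

#include <stdio.h>

int main(void)
{
    struct { int w, h, bpp; } m[] = {
        { 320, 200, 16 },   /* 125 KiB: fits in 128K   */
        { 320, 200, 32 },   /* 250 KiB: already over   */
        { 640, 480, 32 },   /* 1200 KiB                */
    };
    for (int i = 0; i < 3; i++)
        printf("%dx%d@%dbpp: %d KiB\n", m[i].w, m[i].h, m[i].bpp,
               m[i].w * m[i].h * (m[i].bpp / 8) / 1024);
    return 0;
}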

I see a big source of timing problems here.

>> My interconnect bus is 1 cache line (512-bits) per cycle plus
>> address and command.
>
> My bus is 128 bits, but MMIO operations are 64-bits.
>
> Where, for MMIO, every access involves a whole round-trip over the bus
> (unlike for RAM-like access, where things can be held in the L1 cache).
>
> In theory, MMIO operations could be widened to allow 128-bit access,
> but haven't done so. This would require widening the data path for
> MMIO devices.
>
> Can note that when the request goes onto the MMIO bus, data narrows to
> 64-bit and address narrows to 28 bits. Non-MMIO range requests (from
> the ringbus) are not allowed onto the MMIO bus, and the MMIO bus will
> not accept any new requests until the prior request has either finished
> or timed out.
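
A toy software model of that last rule, one outstanding request at a
time with a timeout (state names and the timeout value are invented):

#include <stdint.h>

typedef enum { BUS_IDLE, BUS_BUSY } bus_state_t;

static bus_state_t state = BUS_IDLE;
static uint32_t    cycles;          /* since the request was issued */
#define MMIO_TIMEOUT 1024           /* assumed timeout, in cycles   */

/* Returns 0 if the bridge refuses the request (busy, or the address
 * falls outside the 28-bit MMIO range). */
int mmio_issue(uint32_t addr)
{
    if (state == BUS_BUSY || (addr >> 28) != 0)
        return 0;
    state  = BUS_BUSY;
    cycles = 0;
    return 1;
}

/* Called every cycle: completion or timeout frees the bus. */
void mmio_tick(int done)
{
    if (state == BUS_BUSY && (done || ++cycles >= MMIO_TIMEOUT))
        state = BUS_IDLE;
}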