On 5/23/2025 2:04 AM, Liz Tuddenham wrote:
I was assuming very fast sampling, so that the presentation of each line
was captured by many samples; that way, the software could sort it out
over a large number of repeated passes. Keep the hardware simple and
let the software deal with the errors if it can be given enough data to
start with.
Are you expecting to frequently sample the entire spectrum in each
"pass" ("revolution")? Or, walk the sampling window up/down the spectrum
in stages?
I was expecting to sweep the whole spectrum at high speed many times,
then analyse the captured data. Television-type technology could easily
cope with that data rate from a single photocell.
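One way the "sweep fast, analyse later" scheme might look as a minimal
C sketch -- the 1000-bin sweep, 256-pass batch and 12-bit ADC are
placeholder assumptions, not anything stated above -- is to sum every
pass into one accumulator per bin and average when the batch is done:

#include <stdint.h>

#define BINS   1000          /* samples per sweep (assumed)           */
#define PASSES 256           /* sweeps accumulated before analysis    */

static uint32_t accum[BINS]; /* 32-bit sums can't overflow:           */
                             /* 256 * 4095 (12-bit ADC) is far below  */
                             /* 2^32                                  */

/* Called once per ADC sample as the spectrum sweeps past. */
void capture_sample(uint16_t pass, uint16_t bin, uint16_t adc)
{
    if (pass == 0)
        accum[bin] = 0;      /* clear that bin's sum on the first pass */
    accum[bin] += adc;
}

/* After PASSES sweeps, reduce the sums to an averaged spectrum. */
void average_spectrum(uint16_t out[BINS])
{
    for (int i = 0; i < BINS; i++)
        out[i] = (uint16_t)(accum[i] / PASSES);
}

At those assumed numbers that is about 4KB of accumulator RAM and
roughly a quarter of a million additions per batch -- comfortably
within "television-type" data rates from a single photocell.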
I was looking at the opposite approach: continuously reducing the data
to keep memory and processing requirements modest -- no idea what
OTHER things a processor would be taxed with doing WHILE gathering
this data.
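For contrast, one way such continuous reduction might look, as a
sketch: fold each sample into a running (exponentially-weighted)
average as it arrives, so storage stays at one word per bin no matter
how long the instrument gathers data. The bin count and smoothing
shift below are arbitrary illustrations:

#include <stdint.h>

#define BINS  1000           /* samples per sweep (assumed)           */
#define SHIFT 4              /* smoothing factor 1/16 (arbitrary)     */

static uint16_t avg[BINS];   /* one running average per bin           */

/* Fold a new sample into its bin's running average as it arrives;
 * nothing else is buffered, so memory use never grows with time.     */
void reduce_sample(uint16_t bin, uint16_t adc)
{
    int32_t delta = (int32_t)adc - (int32_t)avg[bin];
    avg[bin] = (uint16_t)((int32_t)avg[bin] + delta / (1 << SHIFT));
}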
I.e., how much time are you expecting to spend PROCESSING the sampled
data vs. acquiring more data?
The ratio can be varied by either the user or the designer of the
instrument. If greater accuracy is required, it will take longer to do
both the capture and the analysis.
[...]
E.g., as presented to me, there was no need for calibration against
a reference standard, "flat" response across the spectrum, etc.
A "laboratory grade" device likely WOULD impose such specifications.
And, bear an associated (likely high) cost.
Some sort of reference source could be used to generate a known spectrum
every 'n' passes; this would also serve for synchronising purposes.
If you, instead, just interpret some "local" source as a reference,
then all you need to do is compare your current data to that reference.
As long as your detector's response doesn't drift faster than your
exposure to new "references", the user never sees the issue of
"calibration".
E.g., I designed a sensor array that monitored for the presence of
samples and reagents in some 60 different locations, concurrently.
The *process* could be exploited to give the software the upper
hand on making deductions so the hardware could be dirt cheap
(there was a -50%/+300% tolerance from sensor to sensor -- BUT,
there were times when you could KNOW that certain sensors were
"seeing nothing", so X on sensor 1 and Y on sensor 2 each represented
the same state, regardless of that inherent manufacturing tolerance).
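The gist of that trick, in a sketch (the 60-sensor count is from the
description above; the 1.5x factor is purely illustrative): learn each
sensor's personal "empty" reading during the windows when the process
guarantees it is seeing nothing, then detect presence relative to that
baseline rather than against any absolute, calibrated threshold.

#include <stdint.h>
#include <stdbool.h>

#define NSENSORS 60

/* Per-sensor "empty" reading, learned while the process guarantees
 * nothing is present at that location.                               */
static uint16_t baseline[NSENSORS];

/* Call during a window when sensor i is known to be seeing nothing.  */
void learn_baseline(int i, uint16_t raw)
{
    baseline[i] = raw;
}

/* Detection is relative to each sensor's own baseline, so X counts
 * on sensor 1 and Y counts on sensor 2 can mean the same thing
 * despite a -50%/+300% spread between units.  The 1.5x threshold
 * here is an arbitrary illustration.                                 */
bool occupied(int i, uint16_t raw)
{
    return (uint32_t)raw * 2u > (uint32_t)baseline[i] * 3u;
}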
Throughout my career, I've eschewed "calibration" as a costly
additional option that doesn't usually add value to a "process".
E.g., when tablets are formed, you can only monitor the forces
exerted on them as a variable amount of material is compressed to
a fixed geometry; or, the resulting size as a fixed force is
exerted.
But, most solid dosing forms don't care about forces or dimensions
of individual doses. Instead, they are concerned with the actual
MASS of the material that "happened" to be present for the event.
Yet, you can't WEIGH individual items to submilligram precision
at a sample rate of hundreds per second! (But you CAN measure
sizes or forces.)
So, you create a product specific profile of force vs. mass (or,
size vs. mass, depending on manufacturing technology used).
And, set control limits based on bogo-units (you only care
what those actual forces/sizes are if you want process
constraints that are portable from machine to machine -- and
you don't typically care about that!).
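Roughly, that boils down to a product-specific lookup from raw force
counts ("bogo-units") to estimated mass, built from a one-time
off-line weighing, with the accept/reject limits applied through that
table. A sketch with made-up profile values:

#include <stdint.h>
#include <stdbool.h>

/* Product-specific profile: raw force counts vs. measured mass (mg),
 * captured once by weighing a sample of tablets off-line.  The
 * numbers below are placeholders, not real data.                     */
#define NPOINTS 5
static const uint16_t force_counts[NPOINTS] = { 100, 200, 300, 400, 500 };
static const uint16_t mass_mg[NPOINTS]      = { 180, 240, 300, 360, 420 };

/* Piecewise-linear interpolation from a raw force reading to an
 * estimated mass.  No traceable calibration anywhere in the chain.   */
static uint16_t force_to_mass(uint16_t f)
{
    if (f <= force_counts[0])           return mass_mg[0];
    if (f >= force_counts[NPOINTS - 1]) return mass_mg[NPOINTS - 1];
    for (int i = 1; i < NPOINTS; i++) {
        if (f <= force_counts[i]) {
            uint32_t span = force_counts[i] - force_counts[i - 1];
            uint32_t off  = f - force_counts[i - 1];
            return mass_mg[i - 1] +
                   (uint16_t)(off * (mass_mg[i] - mass_mg[i - 1]) / span);
        }
    }
    return mass_mg[NPOINTS - 1];        /* not reached */
}

/* Accept/reject each compression event at full machine rate. */
bool dose_in_spec(uint16_t force, uint16_t lo_mg, uint16_t hi_mg)
{
    uint16_t m = force_to_mass(force);
    return m >= lo_mg && m <= hi_mg;
}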
Want to be able to claim NBS traceability for your measurements?
OK, but does that improve the quality of your product? Or,
just make you feel like you're more sophisticated?
There would be no need to accurately control the rotational speed as
long as it was steady in the short term. The reference spectrum would
calibrate the span and the end points; it could also calibrate the
spectral amplitude response of the photo-detector.
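One way to picture that: locate two known emission lines of the
reference in the captured sweep and let them pin down the
bin-to-wavelength mapping, so only short-term steadiness of the
rotation matters. The line wavelengths below are placeholders:

#define LAMBDA_A 436   /* nm, first reference line (placeholder)      */
#define LAMBDA_B 546   /* nm, second reference line (placeholder)     */

/* Bin indices where the two lines were found in the most recent
 * reference sweep; updated every time the reference is presented.    */
static int bin_a, bin_b;

void set_reference_peaks(int a, int b)
{
    bin_a = a;
    bin_b = b;
}

/* Linear bin-to-wavelength mapping: the two reference lines fix the
 * span and the end points, so the absolute rotational speed never
 * has to be known or controlled.                                     */
int bin_to_nm(int bin)
{
    if (bin_b == bin_a)
        return LAMBDA_A;                /* degenerate: not calibrated */
    return LAMBDA_A +
           (bin - bin_a) * (LAMBDA_B - LAMBDA_A) / (bin_b - bin_a);
}

The same reference sweep's amplitudes could also feed the
ratio-against-reference sketch shown earlier, to flatten the
photo-detector's spectral response.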
A small gas-filled discharge tube, pulsed by an ignition transformer,
would suffice for non-critical calibration.
As would the "flashlight" in the phone the user happened to have in
his pocket, at the time.