-
So after finally getting it to run on my GC-01 I played around with it for a while (also to test battery runtime etc.) at home, where the average reading is around 0.13 µSv/h and the instantaneous reading basically always stays under 0.3 µSv/h. I therefore set the alarm value to 0.5 µSv/h, and then I noticed two things that may or may not be related. At startup the alarm sometimes sounds with the reading at around 0.7–0.8 µSv/h, and it quickly drops back down to normal levels. Also, while running, the reading sometimes goes into alarm range and then suddenly drops. Both excursions (up and down) being totally unrealistic makes me think that the two effects might have the same, or at least a similar, cause. Is it somehow possible that the timings of detections are being attributed the wrong way? For example, during the startup phase the system is busy initializing while already measuring, and the timestamps end up too close to each other? And similarly, for the up-and-down while running, the timestamp somehow gets stuck so that some detections are attributed too far in the past, followed by a gap that looks like there were no detections?
-
Considering that this is just a single-threaded processor, and that each measuring operation has to be interleaved with everything else the firmware is doing, I think it's fair to say that yes, stuff like this is likely to occur. Please remember that the GC-01 is not a professional-grade device and, as such, its measurements shouldn't be taken with extreme seriousness. If you're planning to use this data for something, I'd consider logging it to a computer and then averaging the results, or using the standard deviation to estimate the exposure in the area, rather than trusting immediate or very recent readings.
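The log-then-average approach could look something like this minimal Python sketch (the readings are made-up illustration values, not from a real device; substitute your own exported data log):

```python
import statistics

# Hypothetical readings in µSv/h logged over a few minutes (made-up values
# for illustration only) — replace with your own exported log data.
readings = [0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.10, 0.13]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# The mean is a far better exposure estimate than any single reading; the
# standard deviation shows how much the instantaneous value is expected to swing.
print(f"mean = {mean:.3f} µSv/h, spread = ±{stdev:.3f} µSv/h")
```

Over a long enough log, the standard deviation also tells you how often a reading near your 0.5 µSv/h alarm threshold is plausible from pure statistics alone.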
-
It uses interrupts and a hardware timer/counter circuit, so this is not the source of the problem. The main source of what you are experiencing is the random nature of radioactive decay combined with the averaging algorithm. Especially in units with very insensitive tubes, huge fluctuations of the readout can occur. If you want a more stable readout, enable a constant 60 sec. averaging in tube settings. |
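To see why an insensitive tube combined with a short averaging window produces such swings, here's a minimal Poisson simulation (the ~0.2 counts/s background rate is an assumed ballpark figure, not the GC-01 tube's actual sensitivity):

```python
import math
import random

random.seed(42)

def poisson(lam):
    # Knuth's Poisson sampler — adequate for the small means used here
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def simulated_rate(window_s, mean_cps=0.2):
    # Counts collected in one averaging window, converted back to counts/s.
    # With only ~1 expected count in 5 s, a single extra or missing count
    # changes the estimated rate enormously.
    return poisson(mean_cps * window_s) / window_s

# Short windows swing wildly; a constant 60 s window is far steadier.
print("5 s windows: ", [round(simulated_rate(5), 2) for _ in range(8)])
print("60 s windows:", [round(simulated_rate(60), 2) for _ in range(8)])
```

The relative fluctuation of a Poisson count scales as 1/√N, so collecting 12× more counts per window (60 s vs. 5 s) cuts the swing by roughly a factor of 3.5 — which is exactly what the constant 60 s averaging setting buys you.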
-
Rad Pro 2.0.2 will have a minor tweak that should fix these statistical quirks. |