This is a placeholder topic for “Gravity: TCS3430 Tristimulus Color Sensor” comments.

The DFRobot Gravity: TCS3430 XYZ Tristimulus Color Sensor uses the new-generation TCS3430 spectral sensor IC from AMS. It features…
I am looking for an RGB sensor for kids who will do line following with a Raspberry Pi. I've bought the PiicoDev one, but only one sensor can be used at a time, and then I bought them a Grove color sensor, but the RGB values are garbage and of no use to them.
So my question is: is this sensor any good? What I am looking for is black and white RGB readings close to the real values, like 0 0 0 or 255 255 255.
And yes, I will need a multiplexer for it.
Thank you.
Hi @Vova.
I don’t mean to side-step your question, but would an infrared line sensor suit your application?
Hi Vova,
Welcome to the forum! Pix has beaten me to it with the IR line sensor recommendation. Line-following robots have been many people’s introduction to robotics for a long time now, so there are plenty of integrated solutions out there.
All colour sensors will have a bit of a hard time with this task, as there will always be some light reflected off a black line, so you won’t get a perfect 0 0 0. You could build a tolerance into your code to get around this, as in the sketch below. This sensor will almost certainly have the same shortcoming as your last one.
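A rough sketch of what that tolerance could look like, assuming the sensor hands you 0–255 RGB values (the threshold numbers are placeholders you would tune on your actual track):
BLACK_TOLERANCE = 60   # readings darker than this count as the black line
WHITE_TOLERANCE = 200  # readings brighter than this count as the white background

def classify(r, g, b):
    # Average the three channels into a rough brightness value
    brightness = (r + g + b) / 3
    if brightness < BLACK_TOLERANCE:
        return "black"
    if brightness > WHITE_TOLERANCE:
        return "white"
    return "unsure"  # somewhere in between, e.g. the edge of the line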
Let us know what you decide on, we’d love to see how your robot progresses!
Thank you for the replies. Well, we just got the Raspberry Pi Build HAT from you, which can be used to connect LEGO sensors. An IR line sensor could be an option too, but they want to try using OpenCV and a camera.
Ah, that’s a great project.
The relevant OpenCV functions are the contour ones.
Here is an example in Python.
contours, _ = cv2.findContours(binary_frame, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)  # binary_frame is a single-channel thresholded image; use cv2.CHAIN_APPROX_NONE to keep every point
edge_colour = (0, 255, 0)  # BGR colour tuple: draw the contour outlines in green
cv2.drawContours(outframe, contours, -1, edge_colour, 1)  # -1 means draw all contours, thickness 1
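For context, findContours wants a single-channel binary image, so binary_frame above could come from something like this (just a sketch, assuming the camera shows up as cv2.VideoCapture(0)):
import cv2

cap = cv2.VideoCapture(0)                      # Pi camera or USB camera exposed as device 0
ret, frame = cap.read()                        # grab one BGR frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# THRESH_BINARY_INV makes the dark line show up as white pixels, which is what findContours traces
_, binary_frame = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
cap.release()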
In computer vision there is something about contour hierarchy, but I can’t exactly remember how it all fits in.
You’re meant to use it to distinguish the dominant shape in an image.
I’m a little rusty on my OpenCV, but I did a quick Google and this page from the official docs looks promising.
https://docs.opencv.org/4.x/d9/d8b/tutorial_py_contours_hierarchy.html
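If all you need is the dominant shape (the line itself), one common shortcut that skips the hierarchy entirely is to take the largest contour by area and steer toward its centroid. A sketch, continuing from the snippet above:
# Pick the biggest contour by area; in a line-following mask this is usually the line
if contours:
    line_contour = max(contours, key=cv2.contourArea)
    M = cv2.moments(line_contour)
    if M["m00"] > 0:
        cx = int(M["m10"] / M["m00"])  # x position of the line's centroid
        cy = int(M["m01"] / M["m00"])  # y position of the line's centroid
The centroid’s x position (cx) compared to the centre of the frame is then what the steering logic would act on.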
If I wanted to use an XYZ colour sensor alongside OpenCV, I think I would convert the output from the sensor into a NumPy array and then normalise it, i.e. divide the reading by its Euclidean (L2) norm so the values sit on a consistent 0–1 scale rather than raw counts.
I believe this will help compensate for any inconsistencies in the readings.
import numpy as np

# v is the raw (X, Y, Z) reading from the sensor as a float NumPy array
normalized_v = v / np.linalg.norm(v)  # scale to unit length, then pass into your downstream logic
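If you specifically want 0–255 style values like Vova described, a rough sketch would be to scale against the sensor’s full-scale count instead (FULL_SCALE here is an assumption; check the TCS3430 library or datasheet for the real range):
import numpy as np

FULL_SCALE = 65535.0  # assumed 16-bit ADC counts; confirm against the sensor's datasheet

def xyz_to_bytes(x, y, z):
    # Map raw XYZ counts onto 0-255 so black sits near 0 and white near 255
    v = np.array([x, y, z], dtype=float) / FULL_SCALE
    return np.clip(v * 255, 0, 255).astype(np.uint8)
In practice you would probably calibrate FULL_SCALE against a white reference card rather than trusting the nominal ADC range.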
Thank you. I will pass this on to them.