Sorry if this is off-topic, but I’ve been getting a lot of use out of OpenAI’s ChatGPT (which refers to itself as Assistant).
I’ve found it either gives me the correct answer or sets me in the right direction every time; it’s like an instant Stack Overflow. If the answer it gives isn’t specific enough, ask a follow-up question and it will clarify.
https://chat.openai.com/chat
ChatGPT is a phenomenal chatbot. I particularly like how it writes code for you, even if it has some goofy moments. I gave it a go at writing my next tutorial.
The question I posed to it:
Please write me a guide on how to use the Gravity: Lightning Distance Sensor with a Raspberry Pi Single Board Computer
[BIG DISCLAIMER: THE BELOW IS AI-WRITTEN AND IN NO WAY FUNCTIONAL AS A GUIDE]
Sure, I’d be happy to help with that! Before we begin, it’s important to note that I am a large language model trained by OpenAI, so I don’t have any personal experience with the Gravity: Lightning Distance Sensor or the Raspberry Pi Single Board Computer. My answers will be based on information that is publicly available online, but I will do my best to provide a comprehensive guide.
Here is a step-by-step guide on how to use the Gravity: Lightning Distance Sensor with a Raspberry Pi Single Board Computer:
Start by connecting the Gravity: Lightning Distance Sensor to the Raspberry Pi using the provided cable. The cable should have a 3-pin connector on one end, which will connect to the sensor, and a female jumper on the other end, which will connect to the Raspberry Pi.
Once the sensor is connected, you will need to install the necessary software on the Raspberry Pi. You can do this by running the following command in a terminal window: sudo apt-get install wiringpi
Next, you will need to enable the I2C interface on the Raspberry Pi. You can do this by running the following command: sudo raspi-config
In the configuration menu, navigate to “Interfacing Options” and select “I2C.” Then, select “Yes” to enable the I2C interface.
After enabling the I2C interface, you will need to install the I2C tools on the Raspberry Pi. You can do this by running the following command: sudo apt-get install i2c-tools
Once the I2C tools are installed, you can use the i2cdetect command to verify that the Raspberry Pi is able to detect the Gravity: Lightning Distance Sensor. You can do this by running the following command: sudo i2cdetect -y 1
If the sensor is detected, you should see its address (usually 0x62) listed in the output of the i2cdetect command. If the sensor is not detected, you may need to check the connections and make sure they are secure.
Once the sensor is detected, you can use the i2cget command to read the distance value from the sensor. You can do this by running the following command: i2cget -y 1 0x62 0x5c b
The distance value will be returned in centimeters. You can use this value in your own code to do things like display the distance on an LCD screen, or trigger an action when the distance reaches a certain threshold.
And that’s it! You should now be able to use the Gravity: Lightning Distance Sensor with your Raspberry Pi Single Board Computer. Remember to refer to the sensor’s documentation for additional information and troubleshooting tips.
END of AI Response
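For contrast, here is roughly what a working read might look like. One tell in the AI’s guide: the Gravity: Lightning Distance Sensor is DFRobot’s SEN0290, built around the AS3935 lightning IC, and it estimates the distance to a storm front in kilometres, not centimetres. The sketch below is my own and untested on real hardware; I am assuming the module’s default I2C address of 0x03 and the distance estimate sitting in the low six bits of AS3935 register 0x07 (0x3F meaning out of range), so verify both against the DFRobot wiki before trusting it.

# A minimal sketch (mine, not ChatGPT's) for polling the storm distance over I2C.
# Assumptions to verify against the DFRobot wiki: I2C address 0x03 and the
# distance estimate in the low six bits of AS3935 register 0x07.
import time
from smbus2 import SMBus  # install with: sudo pip3 install smbus2

I2C_BUS = 1         # /dev/i2c-1 on most Raspberry Pi models
SENSOR_ADDR = 0x03  # assumed default; the module has an address selector switch
DISTANCE_REG = 0x07
ALERT_KM = 10       # hypothetical threshold for triggering an action

with SMBus(I2C_BUS) as bus:
    while True:
        raw = bus.read_byte_data(SENSOR_ADDR, DISTANCE_REG)
        distance_km = raw & 0x3F  # distance estimate is the low six bits
        if distance_km == 0x3F:
            print("No storm detected in range")
        else:
            print(f"Estimated storm distance: {distance_km} km")
            if distance_km <= ALERT_KM:
                print("Storm is close - this is where you would trigger an action")
        time.sleep(2)

Even that is only a skeleton; DFRobot publish their own AS3935 library for the Raspberry Pi, which handles calibration and interrupts and would be the sane starting point for a real project.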
It seems to me a phenomenal way to brainstorm and scaffold ideas, but it currently lacks the polish and accuracy to replace me (though who knows what the future will hold). The fact that it can produce code or act as a ‘Linux terminal’ is also a huge step up for AI chatbots. As a note, there are already AI tools to aid writing code (Mutable AI and Copilot spring to mind, both no more than a year old) that work in tandem with Visual Studio, and I know those to be practical, useful tools. ChatGPT delivers everything with a fluent, highly confident style that makes whatever it says feel 100% correct, so don’t fall into the trap of assuming it is actually true. Keep a highly critical eye when using it.
There’s a lot about ChatGPT that has yet to be uncovered. We are at the forefront of this new world, and it is properly exciting. Within the next couple of weeks, some AI gems from it are undoubtedly going to pop up.
This chatbot is also not allowed to access the internet and only has information from before 2021. I have no doubt it is aware of the current date (someone probably told it), and I presume it has some understanding of the world as it currently stands through the information we feed it. Much like those archaic CAPTCHAs we used to complete to prove we weren’t robots, each bit of information we feed it as a collective will improve it.
Back when I was at university, the big goal for chatbots was to ‘trick’ people into believing they were real people. That goal has long been surpassed, and it is properly exciting how quickly this is snowballing. A lot of how we confirm that software isn’t malicious has been to trust the Google reviews and Reddit posts vouching for its authenticity. I hope we don’t lose that strength of community (with these chatbots running around providing positive reviews of bad software), and I am unsure of the best way forward once these chatbots become fully integrated with the internet. I’m sure we’ll figure it out though.
# Import the necessary modules.
import adsk.core
import adsk.fusion
# Get the current application.
app = adsk.core.Application.get()
# Get the active design.
design = app.activeProduct
# Get the root component of the active design.
rootComp = design.rootComponent
# Create a new sketch on the xy plane.
sketches = rootComp.sketches
xyPlane = rootComp.xYConstructionPlane
sketch = sketches.add(xyPlane)
# Draw a circle.
circles = sketch.sketchCurves.sketchCircles
circle = circles.addByCenterRadius(adsk.core.Point3D.create(0, 0, 0), 2)
# Get the profile defined by the circle.
prof = sketch.profiles.item(0)
# Create an extrusion input to be able to define the input needed for an extrusion
# while specifying the profile and that a new component is to be created.
extrudes = rootComp.features.extrudeFeatures
extInput = extrudes.createInput(prof, adsk.fusion.FeatureOperations.NewComponentFeatureOperation)
# Define that the extent is a distance extent of 5 cm.
extInput.setDistanceExtent(False, adsk.core.ValueInput.createByReal(5))
# Create the extrusion.
ext = extrudes.add(extInput)
I threw that into the Thonny IDE and then saved it into the correct folder for a Fusion 360 add-in.
^ Potentially World First, Little Bit Academic, and Definitely Esoteric.
As it undoubtedly gets better, it will be a true revolution. A great equalisation of CAD skills is being handed to the wider community, and I look forward to this future.
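In case anyone wants to reproduce it: Fusion 360 scripts need a run(context) entry point to execute, so below is the same cylinder wrapped up the way Fusion expects, with the sizes pulled out as variables. This is my own light adaptation of the AI’s snippet, not more ChatGPT output.

# My own adaptation of the AI script above: the same cylinder wrapped in the
# run(context) entry point Fusion 360 scripts require, with the sizes pulled
# out so they are easy to tweak.
import adsk.core
import adsk.fusion

def run(context):
    app = adsk.core.Application.get()
    design = adsk.fusion.Design.cast(app.activeProduct)
    rootComp = design.rootComponent

    radius = 2  # sketch geometry uses Fusion's internal units (centimetres)
    height = adsk.core.ValueInput.createByString('5 cm')  # Fusion parses the units

    # Sketch a circle on the XY construction plane.
    sketch = rootComp.sketches.add(rootComp.xYConstructionPlane)
    sketch.sketchCurves.sketchCircles.addByCenterRadius(
        adsk.core.Point3D.create(0, 0, 0), radius)

    # Extrude the circle's profile into a new component.
    prof = sketch.profiles.item(0)
    extrudes = rootComp.features.extrudeFeatures
    extInput = extrudes.createInput(
        prof, adsk.fusion.FeatureOperations.NewComponentFeatureOperation)
    extInput.setDistanceExtent(False, height)
    extrudes.add(extInput)

Run it from Fusion’s own Scripts and Add-Ins dialog rather than Thonny, and it should execute against the active design.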
I’ve been using it to explain some tricky Python concepts to me and to generate code I can use or adapt in my projects. I would say it’s spot on 95% of the time. Pretty darn neat!
I’ve been amazed by the limitless possibilities of GPT-4, especially with its vision capabilities. Just imagine how remarkable it would be to incorporate its features with 3D scanning technology, creating highly precise models for various 3D printing applications.
Cheers,
Blayden