Do you know what changes were made in MicroPython from v1.9 to v1.13 that mean I can no longer define a constant with const()? I now get an error for
init_reg = const(b"\xEF\x00\x37\x07\…etc)
In MP 1.13 the error states that this must be an integer, yet the same line works fine under MP 1.9.
Looking at the MicroPython changelog, it appears there were a number of changes to the const table in v1.10, but MicroPython is a beast and my knowledge of its internals is limited, so it's hard to say exactly which change is responsible.
What hardware are you using MP on, and what is the complete error message it is giving you? Those details might give us some more clues, and perhaps indicate if your hardware has an even more recent version of MP available.
I am using a Seeed microbit Bit Gadget expansion board, with a Grove gesture sensor plugged into the i2c port.
The Grove gesture sensor works fine over the I2C port on the Bit Gadget Kit expansion board when using a V1 micro:bit, but not when using a V2 micro:bit with exactly the same code and setup. The code consistently throws this error:
File "gesture.py", line 6
SyntaxError: constant must be an integer
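For anyone hitting the same wall, here is a minimal workaround sketch, assuming the newer build only accepts integers in const(): keep const() for integer values and assign the bytes sequence directly. The names and register values below are illustrative, not taken from the Grove driver:

```python
# CPython-friendly fallback so this sketch runs outside MicroPython too;
# on a real board, `from micropython import const` succeeds.
try:
    from micropython import const
except ImportError:
    def const(x):  # at runtime, const() simply returns its argument
        return x

I2C_ADDR = const(0x73)          # integer: fine as const() on all versions
init_reg = b"\xEF\x00\x37\x07"  # bytes: a plain assignment avoids the SyntaxError
```

On builds where const() is a compile-time optimisation for integers only, dropping it for a bytes value costs nothing at runtime, since const() returns its argument unchanged anyway.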
Hey @David245188, I would wager const() was simply dropped from the micro:bit build of MicroPython. The micro:bit runs a custom version of MicroPython that removes many functions to make room for all the device drivers for the hardware onboard the micro:bit PCB (IMU, display, Bluetooth, etc.).
This is all necessary because the micro:bit has somewhat limited resources.
In MicroPython version 1.13, a change was made to the implementation of the const() function. Prior to version 1.13, const() could be used to mark any Python value as a constant, including integers. However, in version 1.13, const() was updated to accept only integer literals as arguments. This means you can no longer use const() to mark a variable containing an integer as a constant.
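If that's right, here is a quick sketch of what such a compiler would and wouldn't accept. The fallback makes it runnable on CPython as well, and the rejected line is an assumption about the integer-only builds, not something verified against the micro:bit firmware:

```python
try:
    from micropython import const
except ImportError:      # CPython fallback: const() is identity at runtime
    def const(x):
        return x

REG_BANK = const(0xEF)       # integer literal: accepted
MASK = const(0x30 | 0x07)    # constant integer expression: also accepted
# INIT = const(b"\xEF\x00")  # bytes object: SyntaxError on integer-only builds
```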
Rooppoor212784 - thanks, but are you certain? I was using const() with an integer argument.
I think what's happening is that the expression is seen as a byte string that needs to be evaluated.
It might evaluate to an integer or it might not, but the compiler doesn't know which.
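To illustrate the distinction: a bytes literal is a sequence whose elements are integers, but the object itself is not an integer, which is plausibly what the compiler is objecting to:

```python
data = b"\xEF\x00\x37\x07"

print(isinstance(data, int))  # False: data is a bytes object, not an integer
print(data[0])                # 239 (0xEF): indexing yields integers
print(len(data))              # 4: four bytes in the sequence
```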
Interesting thought, but it does not explain why there is no objection when using a V1 micro:bit.