Raspberry Pi Pico W (Wireless WiFi) (CE08703)

That looks pretty rabbit hole-y in itself :joy: I’ll keep it in mind.

Thanks @Gramo. I’ve been meaning to set up a local DNS and have been thinking of running PiHole on my Synology; this might be the nudge I need.

3 Likes

Hey everybody,
If I were to set up a Pico W as an access point (I believe this means it basically creates its own network for other devices to connect to), does anybody know what kind of range the network would have?
Thanks!

1 Like

The range of a Pico W access point network depends on factors such as the environment, obstructions, and the power output of the Pico W module. Typically, the Pico W has a range of up to 50 meters in open space with no obstructions. However, in indoor environments with walls and other obstacles, the range may be reduced.

It’s also important to note that the antenna used with the Pico W module can also affect the range of the network. Some antennas are designed for long-range transmission, while others are designed for short-range transmission.

Ultimately, the range of your Pico W access point network will depend on your specific setup and environment. It’s a good idea to test the network in different locations and with different devices to determine the optimal range for your needs.
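
For reference, getting the access point up in MicroPython only takes a few lines; here’s a minimal sketch (the SSID and password below are placeholders):

import network

# Bring up the Pico W as its own Wi-Fi access point
ap = network.WLAN(network.AP_IF)
ap.config(essid='pico-test', password='pico-pass-123')  # WPA2 passwords need 8+ characters
ap.active(True)

# The Pico W's own address on its network (typically 192.168.4.1)
print(ap.ifconfig())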

3 Likes

I have Pico Ws working in remote IoT applications and their reliability is amazing. The inbuilt RTC is great, but I have noticed that after several months it does drift a little.

To get around this I have wrapped the code which sets the date/time in a function (set_datetime()) which gets called on startup and at every midnight, although seriously once a week would be good enough. The code snippet is:

# Note: the main program works on a 60-second loop, so seconds are ignored
import machine

rtc = machine.RTC()
r = rtc.datetime()
if r[4] == 0 and r[5] == 0:  # hours and minutes both 0 means midnight
    set_datetime()           # re-sync the RTC; defined elsewhere in the program
    print('Midnight')
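
set_datetime() isn’t shown above; on a Pico W that is already connected to Wi-Fi, one common way to implement it (just an assumption here, not necessarily the poster’s method) is an NTP sync using the built-in ntptime module:

import ntptime

def set_datetime():
    # Re-sync the on-board RTC from an NTP server (needs an active Wi-Fi connection).
    # ntptime.settime() sets the RTC to UTC; apply your own offset if you need local time.
    try:
        ntptime.settime()
    except OSError:
        # Network hiccup; leave the RTC as-is and try again at the next call
        pass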
2 Likes

I occasionally have a Pico W which crashes leaving no error to diagnose, and this often occurs after it has been operating continuously for several months. It has presumably ended up in a non-recoverable state where only a reboot would work.

I examined the possibility of using the WDT (watchdog timer) to force a restart, and it looked very promising. The only problem is that my code uses a 60 s loop, and the maximum timeout for the RP2040 WDT is 8388 ms, which is so short it is fairly useless here.
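
The obvious workaround would be to feed the watchdog from inside the 60 s wait rather than once per loop; a rough sketch (the timeout and the 1 s sleep slice are just example values, and do_sixty_second_work() is a placeholder):

import time
from machine import WDT

def do_sixty_second_work():
    pass  # placeholder for the real work done each minute

wdt = WDT(timeout=8000)  # just under the RP2040 maximum of 8388 ms

while True:
    do_sixty_second_work()
    # Rather than one time.sleep(60), sleep in 1 s slices and feed the dog each slice
    for _ in range(60):
        wdt.feed()
        time.sleep(1)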

Does anyone have any other solutions?

4 Likes

Hey Fractal :slight_smile:

Lots of ways to solve this, but here is the solution I would use:

import time
from machine import WDT

wdt = WDT(timeout=8000)  # RP2040 maximum is 8388 ms
time_cache = 0

def get_uptime():
    # Milliseconds since boot. (There is no /proc/uptime on the Pico's MicroPython,
    # so time.ticks_ms() is used here instead.)
    return time.ticks_ms()

def main():
    # MY AWESOME CODE GOES HERE
    pass

def dog_owner():
    global time_cache
    # We feed the watchdog no matter what
    wdt.feed()
    # Here we see if 60 s (60000 ms) has elapsed since the last time this if statement passed.
    if time.ticks_diff(get_uptime(), time_cache) > 60000:
        # If so we reset the cache
        time_cache = get_uptime()
        # and call main
        main()

If you need a compute-once, call-many solution you can curry your main function.
A super neat design pattern :partying_face:

def dog_owner(param1, param2):
    global time_cache
    wdt.feed()
    if time.ticks_diff(get_uptime(), time_cache) > 60000:
        time_cache = get_uptime()
        # We don't call main, we define an instance of it with the parameters we passed in.
        def main():
            return param1 * param2
        # Notice we omit the (): we are not calling it, just returning the function itself.
        return main
    else:
        # If we are not ready to call main because 60 s hasn't elapsed, we return a dummy func.
        # It will just do nothing when it is called.
        def _():
            pass
        return _


# Get a curried main function with all the arguments pre-set.
a_func = dog_owner(2, 3.141)
# Call this as many times as you want; it will work every time because we passed the 60 seconds test.
for _ in range(0, 10):
    a_func()
# Get another curried main function with different arguments.
a_different_func = dog_owner(3, 6.282)
# Call it like normal, but nothing will happen because it has not been 60 seconds since we called dog_owner().
a_different_func()

Pix :heavy_heart_exclamation:

4 Likes

Not a solution to the crash, but logging may help determine the cause of the crash.

I am starting the Pico journey and have created the following rolling appender, similar to what you might see on larger systems.

inboxidau/uRollingAppenderLog: Simple micropython rolling appender logging (github.com)
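
For anyone curious, the rolling-appender idea itself is simple enough to sketch. This is just an illustration of the pattern, not the actual interface of uRollingAppenderLog; the file name and size limit are example values:

import os

class RollingLog:
    # Append log lines to a file; when it grows past max_bytes, rotate it to a .1 backup.
    def __init__(self, path='log.txt', max_bytes=8192):
        self.path = path
        self.max_bytes = max_bytes

    def log(self, msg):
        try:
            if os.stat(self.path)[6] > self.max_bytes:  # index 6 is the file size
                try:
                    os.remove(self.path + '.1')
                except OSError:
                    pass
                os.rename(self.path, self.path + '.1')
        except OSError:
            pass  # log file doesn't exist yet
        with open(self.path, 'a') as f:
            f.write(msg + '\n')

log = RollingLog()
log.log('boot')

When the log exceeds the limit it is renamed to a single .1 backup, so old entries never fill up the flash.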

3 Likes

Bought a couple of Pi Pico W H boards from Core and wondered why I was getting lots of load errors; everything worked fine on small tests.
Found the flash formatted for 1 MB on both boards. Wasted a lot of time.
Did anyone else pick up this issue?
Using Thonny and nothing but a standard MicroPython load via Thonny.

1 Like

Hi there @keith21016 and welcome to the forums, glad to have you here.

That is one hell of a strange issue that I’ve never encountered before. May I ask how you found out that was the fault?

Hi Jane, thanks for the response. Yes, very odd. I am OK on tech but no expert on Pi or MCUs.

Had been testing 3 kB files with no issues, and 17 kB files ran fine via Thonny using the cable.

Could not get main.py to run standalone.

Found main.py was not actually loading; just 1 kB in main.py on the Pico.

My AI friend said this cannot happen.

I continued my dev via USB so kept working.

Did a storage check on the Pico: it reported a negative number, several kB in the minus. WTF.

So MicroPython being about 1 MB is a clue; only 1 MB was available!
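
For anyone wanting to check their own board, MicroPython’s os.statvfs() reports the filesystem size from the REPL; a quick sketch below, numbers will differ per board.

import os

# statvfs returns a 10-tuple: f_frsize (index 1) * f_blocks (index 2) is the total size,
# and f_frsize * f_bfree (index 3) is the free space.
stats = os.statvfs('/')
total_kb = stats[1] * stats[2] // 1024
free_kb = stats[1] * stats[3] // 1024
print('Filesystem: {} kB total, {} kB free'.format(total_kb, free_kb))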

AI coached me through a CircuitPython clean, very simple.

Loaded MicroPython and suddenly got 1 MB free; everything loads fine.

Checked the second board I bought; sure enough, 1 MB available.

Did a clean and got 2 MB.

An expert might have got it quicker.

AI initially had no idea; tried different prompts, then it got it first try.

AIs are fantastic assistants, BUT SESSIONS MUST BE LIMITED TO SAY 30 MINS AND THEN START FRESH. I have tested four, paid and free majors; they all lose it on complex stuff after a while and make heaps of mistakes. Enjoying my retirement :slight_smile: excuse the typos, 1 finger.

You and others might like this.

A question (to Gemini; same type of answer from the others):

I think I have previously discussed the issue with using AI assistance

After long or multiple sessions on the same subject

Confusion sets in

Errors become more likely

Comprehension of the discussion becomes V FRUSTRATING

This is evident from ALL AIs; tested SciSpace agent, Gemini mobile and website Flash/Pro, Claude, ChatGPT 3.5/5.

The performance of all remains excellent.

Is a better approach

To take the good software result

And re-present it as a new project start point

What do you think?

having had excellent results

That’s a fantastic observation about the nature of long-term AI-assisted projects. What you’re experiencing is a common phenomenon that AI researchers and power-users are actively exploring.

Yes, your proposed approach of taking the good software result and re-presenting it as a new project start point is an excellent strategy, and often the best way to maintain clarity and efficiency in complex, multi-session projects.

This is a recognized method for combating a few limitations inherent in current large language models (LLMs).

The Problem: Context Overload and Semantic Drift

The “confusion” and “frustration” you describe stem from two main technical issues:

* Context Window Overload: LLMs operate on a “context window” (a limited memory of the conversation). As the conversation grows longer, older parts of the discussion, including initial goals, constraints, and early drafts, are eventually pushed out. The AI begins to work with an incomplete or heavily summarized view of the history.

* Semantic Drift (Concept Drift): Over many turns, the meaning of key terms or variables can subtly shift. For example, the meaning of “power costs” might start broadly, then narrow to “grid import cost,” then inadvertently drift to include “total household energy usage” as the conversation continues. The model gets bogged down in clarifying or contradicting past statements, leading to frustration.

The Solution: The “Clean Start” Strategy

Your strategy addresses these issues directly by forcing a state reset and focusing the model on the most critical information:

1. New Project: A State Reset

When you start a new conversation with the final, working code, you are effectively:

* Pruning the Noise: You eliminate all the historical dead ends, debugging failures, abandoned ideas, and thousands of tokens of discussion that are no longer relevant.

* Maximizing Context: The valuable initial context is now the core code itself. The model allocates all its context window memory to understanding the current working solution, not the journey to get there.

2. Improved Efficiency and Accuracy

This approach is highly effective because it makes the task simpler for the AI:

* Clearer Goals: You can re-state the next objective (e.g., “Now, integrate the weather monitor inference logic into this code”) without the AI having to scroll back through the history to find the right version of the code or the initial requirements.

* Reduced Errors: The chance of the AI reintroducing an error that was fixed 50 turns ago (but is still lingering in the deep recesses of its summarized memory) is drastically reduced.

In summary, treat the AI as a new colleague for each distinct phase of your project. Give the new “colleague” (the fresh chat session) a clean, current code base and a specific, well-defined next task. It is the most robust way to continue getting excellent results without the inherent frustration of long-term context management.

1 Like

Hi Keith,

A co-worker told me using Thonny (while an amazing IDE for getting started) is a bit old school.

I have recently set up Claude Code in VS Code along with the Pico plugin.

With a bit of pre-prompting, development becomes a breeze.

It can do some basic agentic workflows out of the gate.

1 Like

Agree Liam, SciSpace out of the gate is mind-blowing.
Here is a tactic you might consider.

Insist on a conservative, accurate, considered response; they can learn from you regarding how to approach your tasks.

This approach caused Gemini and Claude to improve dramatically on code, right first time. Not every time, but much, much better.

Best regards, Keith

1 Like