Using Raspberry Pi 5 and AD/DA board

Hi, I am wondering if you could help me. I am a novice at working with electronics, but I am trying to use a 24-bit AD/DA board coupled with a Raspberry Pi 5 in order to capture data from 3 passive geophones. I have connected them in pairs (positive and negative for each geophone) to the AD ports from 0-6, but when displaying the captured data it seems to show a constant voltage. I don't know why, or whether I should remove the reference voltage jumper or add some resistors to the geophones as mentioned in the example video, which uses a 16-bit AD/DA board.

The Raspberry Pi cannot read analog signals, so I presume that your setup is using an Analog-to-Digital Converter (ADC) to read an analog signal from each geophone device and input a digital signal to the Pi. That would explain the two wires. If that’s what you are doing then you would not read those inputs to the Pi as a voltage - instead it would be a digital signal with a communication protocol (perhaps UART, perhaps I2C, or perhaps proprietary) as output by the ADC.

If that’s your arrangement then you should post a link to the specifications for the ADC that you are using so that the digital protocol used by that device can be identified. That will determine the required connections to the Pi and the code (probably from a library) that you will use to read the values from the geophones.

If, on the other hand, the geophones have ADC built-in to the devices, and the reference to AD/DA is to the internal processing carried out within the geophones, then you should post a link to the specifications for the geophones, for the same information about the digital protocol they are using.

Hi @Mauro296964

Welcome to the forum!

So that we can get a better idea of the exact parts you're using, are you able to send through a link to the items? That will tell us specifically what we're dealing with.

Hi, thanks for your support. I am using the High-Precision AD/DA Board by Waveshare (https://www.waveshare.com/wiki/High-Precision_AD/DA_Board) and a Raspberry Pi 5.

Have you confirmed from the AD Demo sketch that the AD/DA board is working correctly? You should be able to turn the potentiometer and see the AD0 channel voltage change, or block the photoresistor and see the AD1 channel voltage change. You can connect a lead from the 3.3V pin to any of the other channels and confirm that it reads as ~3.3V.

Do you have a reference link for the sensor? That will have the information you need for the reference voltage. Note that if you need to adjust the voltage from the sensors to match the requirements of the AD/DA device you will not use a simple resistor - you will use a voltage divider.
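As an illustration of how a divider scales a signal, here is a minimal sketch (the resistor values are arbitrary examples, not a recommendation for any particular sensor):

```python
def divider_output(v_in, r_top, r_bottom):
    """Output of a two-resistor divider: Vout = Vin * Rb / (Rtop + Rb)."""
    return v_in * r_bottom / (r_top + r_bottom)

# e.g. halving a 5 V signal with two equal 10 kOhm resistors:
print(divider_output(5.0, 10_000, 10_000))  # 2.5
```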


Actually, I have tried with another sensor and a modified script, and it worked, thanks for your suggestions. Now I am getting a very nice signal, but instead of capturing 3 differential inputs, the simple real-time visualization and the saved data file only contain the input from ports AD0-AD1. Debugging the script, I found that either the SPI communication is failing or the AD board is not making the correct changes to read the differential inputs. Any suggestions for this new problem?

Hi @Mauro296964,

Thanks for the extra detail. It sounds like you’re following the Core Electronics guide that uses a 16-bit ADS1115 ADC with a passive geophone, and adapting it to work with your Waveshare High-Precision AD/DA Board (ADS1256). That’s a solid approach.

Can you confirm that you’re using the Geophone - Sm-24?

Hey, I am not. I am trying to make a standard script so that I can use some sensors I have, adjusting the script according to each sensor.


Hi @Mauro296964,

Great to hear the signal is now coming through, that’s a big step forward!

Sounds like you're on the right track with the new issue. If you're only seeing data from AD0–AD1, and nothing from the other differential pairs (e.g. AD2–AD3, AD4–AD5), it's likely related to channel switching or SPI communication timing in the script.

The ADS1256 selects which input pair to measure by configuring its internal multiplexer (MUX) register. Your code or the library you are using needs to update this MUX setting before starting a conversion to choose the correct differential input pair.

This means that to read multiple differential channels, your script must:

  • Write the appropriate values to the MUX register to select the desired input pair.
  • Allow a short settling time (a few milliseconds) after changing the input selection before reading the conversion result.

How you do this depends on the specific library or driver you are using.

If your script only reads data from one channel continuously without changing the MUX, you will only get data from that channel.
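As a rough sketch of those two steps, here are the byte-level pieces as small helper functions (the opcode and MUX register address come from the ADS1256 datasheet; the function names are our own, and the actual SPI transfer and DRDY wait are omitted):

```python
REG_MUX  = 0x01  # ADS1256 input multiplexer register
CMD_WREG = 0x50  # write-register opcode, OR'd with the register address

def mux_byte(pos_ain, neg_ain):
    """Pack the positive/negative input numbers into the MUX register value."""
    return (pos_ain << 4) | neg_ain

def wreg_frame(register, value):
    """SPI frame for a one-register write: opcode|addr, (count - 1) = 0, value."""
    return [CMD_WREG | register, 0x00, value]

# Selecting the AIN2-AIN3 differential pair sends three bytes over SPI:
print([hex(b) for b in wreg_frame(REG_MUX, mux_byte(2, 3))])  # ['0x51', '0x0', '0x23']
```

After the write, the ADS1256 also needs a SYNC (0xFC) and WAKEUP (0xFF) command to restart the conversion with the new setting, then a wait for DRDY to go low before reading the result.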

If you can share the library or code you are using, we can help you identify how to properly switch between differential inputs.

Here it is, the code I am using:

import time
import numpy as np
import spidev
import RPi.GPIO as GPIO
import os
import signal
import sys
import multiprocessing
import logging
from obspy import Stream, Trace, UTCDateTime
from multiprocessing.shared_memory import SharedMemory
from collections import deque # NEW: Import deque
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import multiprocessing.sharedctypes

# --- Configuration Constants ---
CHANNELS = {
    "EHZ": 0,
    "EHN": 1,
    "EHE": 2,
}

VREF = 5
ADC_RESOLUTION = 24
ADC_MAX_COUNT = 2**(ADC_RESOLUTION - 1) - 1
ADC_MIN_COUNT = -2**(ADC_RESOLUTION - 1)

ADC_GAIN = 1

GEOPHONE_SENSITIVITY = {
    "EHZ": 400.0,
    "EHN": 400.0,
    "EHE": 400.0
}

RST_PIN = 18
CS_PIN = 22
DRDY_PIN = 17

# --- GPIO and SPI Initialization (unchanged, assumed to be configured correctly in main) ---
GPIO.setmode(GPIO.BCM)
GPIO.setup(RST_PIN, GPIO.OUT)
GPIO.setup(CS_PIN, GPIO.OUT)
GPIO.setup(DRDY_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

spi = spidev.SpiDev()
spi.open(0, 0)
spi.max_speed_hz = 3900000
spi.mode = 0b01

acquiring = True

logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")

# --- Helper Functions (unchanged) ---
def signal_handler(sig, frame):
    global acquiring
    logging.info("Stopping acquisition...")
    acquiring = False
    time.sleep(0.5)

def digital_write(pin, value):
    GPIO.output(pin, value)

def digital_read(pin):
    return GPIO.input(pin)

def delay_ms(delaytime):
    time.sleep(delaytime / 1000.0)

def spi_writebyte(data):
    digital_write(CS_PIN, GPIO.LOW)
    spi.writebytes(data)
    digital_write(CS_PIN, GPIO.HIGH)

def spi_readbytes(num_bytes):
    digital_write(CS_PIN, GPIO.LOW)
    data = spi.readbytes(num_bytes)
    digital_write(CS_PIN, GPIO.HIGH)
    return data

def configure_adc(gain=0x00, data_rate=0x82):
    """Properly configures ADC with support for all channels (AIN0-AIN5)"""
    
    # 1. Stop continuous conversion mode if active
    # (0x0F is SDATAC on the ADS1256; 0xFE is the RESET opcode)
    digital_write(CS_PIN, GPIO.LOW)
    spi.xfer2([0x0F])  # SDATAC command
    digital_write(CS_PIN, GPIO.HIGH)
    time.sleep(0.01)

    # 2. Hardware reset
    digital_write(RST_PIN, GPIO.LOW)
    time.sleep(0.1)  # Reset pulse width > 4 clock cycles
    digital_write(RST_PIN, GPIO.HIGH)
    time.sleep(0.2)  # Extended wait for full reset

    # 3. Wait for DRDY to go low (device ready)
    timeout = time.monotonic() + 0.5
    while digital_read(DRDY_PIN) == GPIO.HIGH:
        if time.monotonic() > timeout:
            logging.error("ADC not responding after reset!")
            return False
        time.sleep(0.001)

    # 4. Configure all necessary registers with verification
    try:
        # Register configuration (STATUS, MUX, ADCON, DRATE)
        config = {
            0x00: 0x00,  # STATUS: Default settings
            0x01: 0x01,  # MUX: Default to AIN0 and AIN1
            0x02: (0 << 5) | (0 << 3) | gain,  # ADCON
            0x03: data_rate,  # DRATE
            0x04: 0x00,  # IO: GPIO all inputs
        }
        
        # Write configuration
        for addr, value in config.items():
            digital_write(CS_PIN, GPIO.LOW)
            spi.xfer2([0x50 | addr, 0x00, value])  # WREG command
            digital_write(CS_PIN, GPIO.HIGH)
            time.sleep(0.001)
        
        # Verify configuration
        for addr in config.keys():
            digital_write(CS_PIN, GPIO.LOW)
            spi.xfer2([0x10 | addr, 0x00])  # RREG command
            readback = spi.xfer2([0x00])[0]
            digital_write(CS_PIN, GPIO.HIGH)
            
            if readback != config[addr]:
                logging.error(f"Register 0x{addr:02x} mismatch! "
                            f"Wrote 0x{config[addr]:02x}, read 0x{readback:02x}")
                return False
        
        logging.info("ADC registers configured successfully")
        
    except Exception as e:
        logging.error(f"Register configuration failed: {str(e)}")
        return False

    # 5. Perform system calibration
    try:
        logging.info("Performing system calibration...")
        
        # Self-calibration (0xF0 is SELFCAL on the ADS1256: offset and gain)
        spi_writebyte([0xF0])
        timeout = time.monotonic() + 1.0
        while digital_read(DRDY_PIN) == GPIO.HIGH:
            if time.monotonic() > timeout:
                raise TimeoutError("Self-calibration timeout")
            time.sleep(0.001)
        
        # Offset self-calibration (0xF1 is SELFOCAL on the ADS1256)
        spi_writebyte([0xF1])
        timeout = time.monotonic() + 1.0
        while digital_read(DRDY_PIN) == GPIO.HIGH:
            if time.monotonic() > timeout:
                raise TimeoutError("Offset self-calibration timeout")
            time.sleep(0.001)
            
        logging.info("ADC calibration complete")
        return True
        
    except Exception as e:
        logging.error(f"Calibration failed: {str(e)}")
        return False

def read_adc_channel(channel_index):
    mux_settings = [
        ((0x00) << 4) | (0x01),  # EHZ: AIN0-AIN1
        ((0x02) << 4) | (0x03),  # EHN: AIN2-AIN3
        ((0x04) << 4) | (0x05)   # EHE: AIN4-AIN5
    ]

    if channel_index >= len(mux_settings):
        logging.error(f"Invalid channel_index: {channel_index}")
        return None

    # Stop continuous read mode before reconfiguring the multiplexer
    # (0x0F is SDATAC on the ADS1256; the original 0xFE is the RESET opcode,
    # which wipes the MUX/DRATE registers back to their defaults)
    digital_write(CS_PIN, GPIO.LOW)
    spi.xfer2([0x0F])  # SDATAC command (Stop Data Continuous)
    digital_write(CS_PIN, GPIO.HIGH)
    time.sleep(0.0001)

    # Select the differential input pair for this channel
    digital_write(CS_PIN, GPIO.LOW)
    spi.xfer2([0x50 | 0x01, 0x00, mux_settings[channel_index]])  # WREG to MUX register
    digital_write(CS_PIN, GPIO.HIGH)
    time.sleep(0.001)  # allow the multiplexer to settle

    # Restart the conversion so it runs with the new MUX setting
    digital_write(CS_PIN, GPIO.LOW)
    spi.xfer2([0xFC])  # SYNC command
    digital_write(CS_PIN, GPIO.HIGH)
    time.sleep(0.00001)

    digital_write(CS_PIN, GPIO.LOW)
    spi.xfer2([0xFF])  # WAKEUP command
    digital_write(CS_PIN, GPIO.HIGH)
    time.sleep(0.00001)

    # Wait for DRDY to go low, signalling the conversion is complete
    timeout_start = time.monotonic()
    while digital_read(DRDY_PIN) == GPIO.HIGH:
        if time.monotonic() - timeout_start > 0.5:
            logging.error(f"DRDY stuck HIGH for channel index {channel_index}. Cannot acquire data.")
            return None
        if not acquiring:
            return None
        time.sleep(0.000001)

    # Read the 24-bit conversion result (three bytes, MSB first)
    digital_write(CS_PIN, GPIO.LOW)
    spi.writebytes([0x01])  # RDATA command (Read Data)
    data = spi.readbytes(3)
    digital_write(CS_PIN, GPIO.HIGH)

    # Assemble the bytes and sign-extend the 24-bit two's-complement value
    value = (data[0] << 16) | (data[1] << 8) | data[2]
    if value & 0x800000:
        value -= 0x1000000

    # Convert counts to volts
    voltage = (value * VREF) / (2**(ADC_RESOLUTION - 1)) / ADC_GAIN

    # logging.debug(f"Read {voltage:.6f}V from channel {channel_index}")
    return voltage
    
def save_to_miniseed(data, start_time, sampling_rate, file_name):
    """Save data to MiniSEED format."""
    traces = []
    for channel_code, channel_data in data.items():
        if channel_data:
            trace = Trace(data=np.array(channel_data, dtype=np.float32))
            trace.stats.station = "UNI"
            trace.stats.network = "NU"
            trace.stats.channel = channel_code
            trace.stats.starttime = start_time
            trace.stats.sampling_rate = sampling_rate
            traces.append(trace)

    if traces:
        stream = Stream(traces)
        stream.write(file_name, format="MSEED")
        logging.info(f"Data saved to {file_name} with {len(traces[0].data)} samples.")
    else:
        logging.warning(f"No data to save for {file_name}.")

# --- Multiprocessing Worker Functions ---
def acquisition_worker(sampling_rate, buffer_size, shm_name, shm_shape, shared_current_index):
    global acquiring
    sampling_interval = 1.0 / sampling_rate
    
    existing_shm = SharedMemory(name=shm_name)
    buffer = np.ndarray(shm_shape, dtype=np.float32, buffer=existing_shm.buf)

    buffer_index = 0
    next_sample_time = time.monotonic()

    logging.info(f"Acquisition worker started with sampling rate {sampling_rate} Hz.")

    while acquiring:
        try:
            current_time = time.monotonic()
            
            if current_time < next_sample_time:
                time.sleep(max(0, next_sample_time - current_time - 0.0001))
            
            while time.monotonic() < next_sample_time:
                pass
            
            velocity_sample = [0.0] * len(CHANNELS)
            
            for i, (channel_code, channel_idx) in enumerate(CHANNELS.items()):
                voltage = read_adc_channel(channel_idx)
                
                if voltage is None:
                    break
                
                velocity_sample[channel_idx] = voltage / GEOPHONE_SENSITIVITY[channel_code]

            if not acquiring:
                break
            
            for channel_idx in range(len(CHANNELS)):
                buffer[channel_idx, buffer_index] = velocity_sample[channel_idx]

            buffer_index = (buffer_index + 1) % buffer_size
            
            with shared_current_index.get_lock():
                shared_current_index.value = buffer_index

            next_sample_time += sampling_interval

        except Exception as e:
            logging.error(f"Error in acquisition worker: {e}", exc_info=True)
            time.sleep(0.5)

    logging.info("Acquisition worker stopping.")
    existing_shm.close()

def saving_worker(sampling_rate, buffer_size, shm_name, shm_shape, save_interval_seconds, shared_current_index):
    global acquiring
    
    existing_shm = SharedMemory(name=shm_name)
    buffer = np.ndarray(shm_shape, dtype=np.float32, buffer=existing_shm.buf)

    accumulated_data = {channel_code: deque() for channel_code in CHANNELS.keys()}
    
    saving_worker_read_pointer = 0

    target_samples_per_segment = int(save_interval_seconds * sampling_rate)
    

    initial_start_time = UTCDateTime.now()
    segment_start_time = UTCDateTime(
        (initial_start_time.timestamp // save_interval_seconds) * save_interval_seconds
    )

#    logging.info(f"Saving worker started with target save interval {save_interval_seconds} seconds ({target_samples_per_segment} samples).")

    while acquiring:
        try:
            # Get the current write index from the acquisition process
            with shared_current_index.get_lock():
                current_acquisition_index = shared_current_index.value

            # Calculate new samples available since last read by saving_worker
            if current_acquisition_index >= saving_worker_read_pointer:
                samples_to_read = current_acquisition_index - saving_worker_read_pointer
            else: # Wrap-around occurred
                samples_to_read = (buffer_size - saving_worker_read_pointer) + current_acquisition_index
            
            if samples_to_read > 0:
                temp_data_snapshot = buffer.copy() # Get a consistent snapshot
                
                # Extract and append new samples to internal deques
                for channel_code, channel_idx in CHANNELS.items():
                    start_idx = saving_worker_read_pointer
                    end_idx = (saving_worker_read_pointer + samples_to_read) % buffer_size

                    if start_idx < end_idx: # No wrap-around in this read chunk
                        new_samples = temp_data_snapshot[channel_idx, start_idx:end_idx]
                    else: # Wrap-around in this read chunk
                        new_samples = np.concatenate((temp_data_snapshot[channel_idx, start_idx:],
                                                      temp_data_snapshot[channel_idx, :end_idx]))
                    accumulated_data[channel_code].extend(new_samples)
                
                # Update saving_worker's read pointer
                saving_worker_read_pointer = current_acquisition_index

            # Check if enough data has accumulated in internal buffers to save a segment
            # We check the length of the EHZ channel as a reference
            if len(accumulated_data["EHZ"]) >= target_samples_per_segment:
                # Extract a segment from the beginning of the accumulation buffer
                data_to_save = {}
                for channel_code in CHANNELS.keys():
                    segment = [accumulated_data[channel_code].popleft() for _ in range(target_samples_per_segment)]
                    data_to_save[channel_code] = segment

                # The start time for this segment is based on the current segment_start_time
                # (which gets updated by the previous segment's end time)
                # It's crucial this start time corresponds to the *first sample* in `segment`.
                
                # If this is the very first segment, its start time might be different from the 'aligned' segment_start_time.
                # However, for continuous segments, each segment_start_time will be (previous_segment_start_time + save_interval_seconds)
                
                # Generate filename
                # Use segment_start_time to name the file, which should be aligned or adjusted
                start_str = segment_start_time.strftime("%Y_%m_%d_%H%M%S")
                # Calculate the end time of the *saved* segment for the filename
                segment_end_time = UTCDateTime(segment_start_time.timestamp + save_interval_seconds)
                end_str = segment_end_time.strftime("%H%M%S")

                today = segment_start_time.strftime("%Y_%m_%d")
                vel_dir = os.path.join(today, "vel")
                os.makedirs(vel_dir, exist_ok=True)
                
                vel_file = os.path.join(vel_dir, f"{start_str}_{end_str}_vel.mseed")
                
                save_to_miniseed(data_to_save, segment_start_time, sampling_rate, vel_file)
                
                # Update the segment_start_time for the next segment
                segment_start_time = segment_end_time
#            else:
##                logging.info(f"Saving worker: Accumulating data. Current buffer has {len(accumulated_data['EHZ'])}/{target_samples_per_segment} samples.")

            time.sleep(1) # Check for new data every second

        except Exception as e:
            logging.error(f"Error in saving worker: {e}", exc_info=True)
            time.sleep(5)

    # Handle any remaining data in the accumulation buffers when stopping
    logging.info("Saving worker stopping. Checking for remaining data...")
    if len(accumulated_data["EHZ"]) > 0:
        logging.info(f"Saving remaining {len(accumulated_data['EHZ'])} samples.")
        data_to_save_final = {}
        actual_samples_remaining = len(accumulated_data["EHZ"])
        actual_duration_remaining = actual_samples_remaining / sampling_rate

        for channel_code in CHANNELS.keys():
            data_to_save_final[channel_code] = list(accumulated_data[channel_code])

        # Use the segment_start_time from the last full segment, or now if no full segments were saved
        final_segment_start_time = segment_start_time
        if initial_start_time.timestamp > segment_start_time.timestamp and len(accumulated_data["EHZ"]) == actual_samples_remaining:
            # This implies no full segments were saved, so the remaining data started at initial_start_time
            final_segment_start_time = initial_start_time

        final_segment_end_time = UTCDateTime(final_segment_start_time.timestamp + actual_duration_remaining)

        today = final_segment_start_time.strftime("%Y_%m_%d")
        vel_dir = os.path.join(today, "vel")
        os.makedirs(vel_dir, exist_ok=True)
        
        start_str = final_segment_start_time.strftime("%Y_%m_%d_%H%M%S")
        end_str = final_segment_end_time.strftime("%H%M%S")
        vel_file = os.path.join(vel_dir, f"{start_str}_{end_str}_remaining_vel.mseed") # Mark as remaining

        save_to_miniseed(data_to_save_final, final_segment_start_time, sampling_rate, vel_file)
        
    existing_shm.close()

# Visualization worker 
def visualization_worker(shm_name, shm_shape, sampling_rate, shared_current_index):
    global acquiring
    
    existing_shm = SharedMemory(name=shm_name)
    buffer = np.ndarray(shm_shape, dtype=np.float32, buffer=existing_shm.buf)
    buffer_size = shm_shape[1]  # needed for the wrap-around indexing in update()
    
    plt.style.use('dark_background')
    fig, axes = plt.subplots(len(CHANNELS), 1, figsize=(10, 8), sharex=True)
    if len(CHANNELS) == 1:
        axes = [axes]

    lines = [ax.plot([], [])[0] for ax in axes]
    
    display_time_seconds = 10
    samples_to_display = int(display_time_seconds * sampling_rate)

    for ax, (channel_code, channel_idx) in zip(axes, CHANNELS.items()):
        ax.set_title(channel_code)
        ax.set_ylabel('Velocity (m/s)')
        ax.set_xlim(0, display_time_seconds)
        ax.set_ylim(-0.001, 0.001)

    axes[-1].set_xlabel('Time (s)')

    logging.info("Visualization worker started.")

    def update(frame):
        if not acquiring:
            plt.close(fig)
            return lines

        with shared_current_index.get_lock():
            current_write_index = shared_current_index.value

        current_data_snapshot = buffer.copy()

        for i, (channel_code, channel_idx) in enumerate(CHANNELS.items()):
            if current_write_index == 0 and buffer[channel_idx, 0] == 0:
                actual_samples_available = 0
            else:
                actual_samples_available = min(
                    samples_to_display,
                    current_write_index if current_write_index != 0 else buffer_size
                )

            if actual_samples_available == 0:
                y_data = np.array([])
            elif current_write_index >= actual_samples_available:
                start_idx_display = current_write_index - actual_samples_available
                y_data = current_data_snapshot[channel_idx, start_idx_display : current_write_index]
            else:
                samples_from_end = actual_samples_available - current_write_index
                y_data = np.concatenate((current_data_snapshot[channel_idx, buffer_size - samples_from_end : buffer_size],
                                         current_data_snapshot[channel_idx, 0 : current_write_index]))
            
            x_data = np.arange(len(y_data)) / sampling_rate
            
            lines[i].set_data(x_data, y_data)
            
            axes[i].relim()
            axes[i].autoscale_view(True, False, True)

        return lines

    ani = FuncAnimation(fig, update, interval=200, blit=False, cache_frame_data=False)
    plt.show()
    
    logging.info("Visualization worker stopping.")
    existing_shm.close()

# --- Main Execution  ---
def main():
    sampling_rate = 100
    buffer_duration_minutes = 5
    buffer_size = int(sampling_rate * buffer_duration_minutes * 60)
    
    try:
        configure_adc(gain=0x00, data_rate=0x82)

        signal.signal(signal.SIGINT, signal_handler)

        shm_shape = (len(CHANNELS), buffer_size)
        logging.info(f"Creating shared memory with shape: {shm_shape}")
        shm = SharedMemory(create=True, size=np.prod(shm_shape) * np.float32().itemsize)
        
        temp_buffer_init = np.ndarray(shm_shape, dtype=np.float32, buffer=shm.buf)
        temp_buffer_init.fill(0)
        del temp_buffer_init

        current_acquisition_index = multiprocessing.sharedctypes.Value('i', 0)

        acquisition_process = multiprocessing.Process(
            target=acquisition_worker, args=(sampling_rate, buffer_size, shm.name, shm_shape, current_acquisition_index)
        )
        saving_process = multiprocessing.Process(
            target=saving_worker, args=(sampling_rate, buffer_size, shm.name, shm_shape, 300, current_acquisition_index)
        )
        visualization_process = multiprocessing.Process(
            target=visualization_worker, args=(shm.name, shm_shape, sampling_rate, current_acquisition_index)
        )
        
        acquisition_process.start()
        saving_process.start()
        visualization_process.start()

        logging.info("Starting sensor data acquisition, saving, and visualization.")

        acquisition_process.join()
        saving_process.join()
        visualization_process.join()

        logging.info("All processes stopped.")

    except Exception as e:
        logging.error(f"An error occurred in main: {e}", exc_info=True)
    finally:
        logging.info("Cleaning up resources...")
        spi.close()
        GPIO.cleanup()
        if 'shm' in locals() and shm:
            try:
                shm.close()
                shm.unlink()
                logging.info("Shared memory unlinked.")
            except FileNotFoundError:
                logging.warning("Shared memory already unlinked or not found.")
            except Exception as e:
                logging.error(f"Error unlinking shared memory: {e}")

if __name__ == "__main__":
    main()

To find out if you are configuring and addressing the inputs correctly, replace one of the inputs with a low-voltage battery (e.g. a standard AA) and confirm that you get the reading you expect for that battery (e.g. 1.2V). Then re-arrange the inputs to confirm that the reading moves with the changes. To find out whether the reading you are getting from the sensors is correct, consult the reference documentation for the sensors to determine the signal voltage level specified for their current state (i.e. supply voltage and (presumably) passive).
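For reference, the raw count that a known voltage should produce follows directly from the conversion formula already used in the script above (VREF = 5 V and gain = 1 are assumptions here; they depend on the board's jumper and register settings):

```python
VREF = 5.0      # assumed reference voltage; check the board's jumper setting
ADC_GAIN = 1    # assumed PGA gain

def counts_to_voltage(counts, vref=VREF, gain=ADC_GAIN):
    """Map a signed 24-bit ADC count back to an input voltage."""
    return counts * vref / (1 << 23) / gain

def voltage_to_counts(voltage, vref=VREF, gain=ADC_GAIN):
    """Inverse: the raw count a given input voltage should produce."""
    return round(voltage * gain * (1 << 23) / vref)

# A 1.2 V AA battery should read roughly 2,013,266 counts:
print(voltage_to_counts(1.2))  # 2013266
```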


Thanks for your comment. I found out the ADC is not updating the acquisition parameters (readback shows STATUS: 0x00, MUX: 0x00, ADCON: 0x00, DRATE: 0x00), so I can't change the sampling frequency or get the values from each differential input.


Hi @Mauro296964,

Thanks for sharing the code. A few of the time.sleep() values are extremely short (e.g. 10 microseconds), and on a Raspberry Pi these may not reliably delay at all. Try increasing the sleep() values in the ADC communication steps to at least 0.0001 or even 0.001 to give the SPI commands proper spacing. Start there and see if it improves behaviour, especially around DRDY getting stuck high.
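If a step genuinely needs sub-millisecond spacing, one workaround is to busy-wait on the monotonic clock instead of calling time.sleep(). A sketch (note it burns CPU while spinning):

```python
import time

def delay_us(microseconds):
    """Spin until the interval elapses; more dependable than time.sleep()
    for delays under ~100 microseconds on a non-realtime kernel."""
    deadline = time.monotonic() + microseconds / 1_000_000
    while time.monotonic() < deadline:
        pass

delay_us(10)  # e.g. in place of time.sleep(0.00001)
```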

Also, Jeff's suggestion is a great one: try replacing one of the sensor inputs with a known voltage source, like a 1.2V AA battery. If the ADC reads that correctly, it confirms your input channel configuration and voltage scaling are working as expected. Then swap that input to a different channel and confirm the reading follows; that will help verify whether your mux settings and wiring are correct.

Lastly, if you can find or build a basic test script specifically for your ADC (e.g. a loop that just configures and reads a single channel), it’s a good way to isolate issues in a simpler environment.

Let us know how you go, happy to help dig deeper!