Blog

  • Home Assistant, Matter over Wifi, and isolated VLANs

    I wanted to have a setup where I could have my matter-over-wifi devices connected to Home Assistant, while also having them isolated into their own VLAN, along with the matter server — and nothing else. This turned out to be more complicated than I expected.

    In case anyone out there finds it useful, here’s how I made it work.

    You’ll need the following:

    • matter-server installed somewhere HA can talk to it
    • The string contained in the QR code of the device
    • A python script to decode this string
    • An installation of chip-tool on a machine with bluetooth
    • Some way of talking to matter-server with a websocket tool – I used websocat

    For troubleshooting, it will be handy to be able to run a packet capture on the matter VLAN, probably on the kubernetes node where matter-server is running.

    Matter server

You need to run your matter server somewhere HA can talk to it, and it needs to be able to communicate with the matter-over-wifi devices.

    In my setup, that is done by having matter-server in its own kubernetes namespace, with a service exposing port 5580 to the rest of the cluster (so HA can access it), and using multus to give the pod a second interface directly connected to the VLAN the matter devices are assigned to. I also use networkPolicy to make sure that the only thing talking to the matter server namespace is HA.
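The NetworkPolicy half of that looks roughly like the following (a sketch only: the namespace names and the label selector are placeholders for my setup, and the multus side is just an annotation on the pod, shown in the trailing comment):

```yaml
# Hypothetical names and labels -- adjust for your own cluster.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ha-only
  namespace: matter            # namespace of the matter-server pod
spec:
  podSelector: {}              # applies to every pod in this namespace
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: home-assistant
      ports:
        - protocol: TCP
          port: 5580
---
# The multus attachment is an annotation on the matter-server pod spec,
# naming a NetworkAttachmentDefinition for the matter VLAN, e.g.:
#   k8s.v1.cni.cncf.io/networks: matter-vlan
```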

    The arguments I’m supplying to the matter server process are the following:

    "--storage-path", "/data", "--paa-root-cert-dir", "/data/credentials", "--log-level", "debug", "--primary-interface", "net1"
    

The --primary-interface argument is described as the “Primary network interface for link-local addresses (optional)”, and net1 is the multus interface for talking to the matter devices.

    Decoding the QR data

    The data in the QR code should be something of the form MT:Yxxxxxxxxxxxxxxxxxx and it includes the information needed for both phases of the commissioning process.

The script you need to decode this is buried in the git repo at https://github.com/project-chip/connectedhomeip — you can check out the whole thing, or just extract the SetupPayload.py and Base38.py files from src/setup_payload/python.

    Once you have the script (and have built a venv or installed enough packages for all of the things it uses) you can run it as follows:

    python3 SetupPayload.py parse MT:Yxxxxxxxxxxxxxxxxxx

    Which should produce output something like:

    Parsing payload: MT:Yxxxxxxxxxxxxxxxxxx
    Flow                     :0
    Pincode                  :nnnnnnnn
    Short Discriminator      :x
    Long Discriminator       :yyy
    ...
    

    The important bits here are the Pincode and Long Discriminator fields.
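If you’re curious what the script is actually doing, here’s a minimal sketch of the decode (plus an encode so it can round-trip). The field offsets and the base38 chunking are my reading of the Matter onboarding-payload format, not something taken from SetupPayload.py itself, so treat them as assumptions:

```python
# Sketch of the decoding SetupPayload.py performs: the payload is 88 bits
# packed LSB-first, base38-encoded in 3-byte chunks (5 chars per chunk,
# fewer for the trailing partial chunk). Offsets are from my reading of
# the Matter spec -- verify against the real script before relying on them.

BASE38 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ-."

def base38_decode(s: str) -> bytes:
    out = bytearray()
    for i in range(0, len(s), 5):
        chunk = s[i:i + 5]
        value = 0
        for c in reversed(chunk):      # chars are stored least-significant first
            value = value * 38 + BASE38.index(c)
        out += value.to_bytes({5: 3, 4: 2, 2: 1}[len(chunk)], "little")
    return bytes(out)

def base38_encode(data: bytes) -> str:
    out = []
    for i in range(0, len(data), 3):
        chunk = data[i:i + 3]
        value = int.from_bytes(chunk, "little")
        for _ in range({3: 5, 2: 4, 1: 2}[len(chunk)]):
            out.append(BASE38[value % 38])
            value //= 38
    return "".join(out)

def parse_qr(code: str) -> dict:
    assert code.startswith("MT:")
    bits = int.from_bytes(base38_decode(code[3:]), "little")
    def field(off, width):
        return (bits >> off) & ((1 << width) - 1)
    return {
        "version":            field(0, 3),
        "vendor_id":          field(3, 16),
        "product_id":         field(19, 16),
        "flow":               field(35, 2),
        "discovery":          field(37, 8),
        "long_discriminator": field(45, 12),
        "pincode":            field(57, 27),
    }

def make_qr(vendor_id, product_id, discriminator, pincode,
            version=0, flow=0, discovery=4):
    # Pack the fields LSB-first into 88 bits (11 bytes) and base38-encode.
    bits = (version | vendor_id << 3 | product_id << 19 | flow << 35
            | discovery << 37 | discriminator << 45 | pincode << 57)
    return "MT:" + base38_encode(bits.to_bytes(11, "little"))
```

The real script handles more cases (optional TLV data, manual pairing codes); this is just the core bit-twiddling.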

    Joining the matter device to your wifi network

    For this, you need a machine with the bluetooth stack and chip-tool installed. I am assuming you already have your network configured in such a way that when the device joins the wireless network, it will end up connected to the isolated matter VLAN.

    You will want to get a copy of the production PAA certificates used to verify matter devices. They can be found in the same github repo as above, under credentials/production/paa-root-certs

    Once you have them, you need to put them where chip-tool can get at them. If you are using the snap installation, this can be a real problem since it doesn’t have access to most of the filesystem. In the end, I had to copy them into /tmp/snap-private-tmp/snap.chip-tool/tmp/paa (which then allowed them to appear at /tmp/paa to the command). If you are not using the snap installation you can probably put them anywhere you want, adjust the following command appropriately.

    Run the following command:

    chip-tool pairing ble-wifi 0x0001 <ssid> <password> <pincode> <long_discriminator> --paa-trust-store-path /tmp/paa

    where ssid and password are the credentials for your wifi network, and pincode and long_discriminator are from the decoded QR code. The 0x0001 is just an arbitrary ID number you are assigning to the device.

    Note that this command will ultimately fail, because you won’t have a connection to the matter VLAN and thus it won’t see the mDNS traffic it expects. However, it should have already succeeded in passing the wifi credentials to the matter device, which is all we need it to do.

    Persuade matter-server to commission the device

    At this point, we should have both the matter device and the matter-server able to talk to each other in the matter VLAN. We now need to prod the server into commissioning the device, by connecting to the websocket and injecting some json.

    In my scenario, that involves running up a websocat container in the same kubernetes namespace as the Home Assistant pod (If you aren’t using network policy you probably don’t need to worry about namespaces or labels):

    kubectl run [-n <ha-namespace>] websocat --image=ghcr.io/vi/websocat:nightly [--labels='foo=bar'] -it --rm -- ws://<matter-service-name>.<matter-namespace>.svc.cluster.local:5580/ws

    Where ha-namespace is the kubernetes namespace your HA pod is in, the labels are anything you need to satisfy your network policy ingress rules, and matter-service-name and matter-namespace are the name of the service and the namespace associated with your matter-server pod.

    Once that has started (you may or may not see anything in the terminal, but you should see a connection in the debug log of matter-server), you should paste in a single line containing the following:

    { "message_id": "1", "command": "commission_with_code", "args": { "code": "MT:Yxxxxxxxxxxxxxxxxxx", "network_only": true } }

    where the code is the original data from the QR code.
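If you’d rather script this than paste JSON into websocat by hand, here’s a minimal sketch. The message format is exactly the one above; using the third-party `websockets` package and looping until a reply with the matching message_id arrives are my additions, not part of the original recipe:

```python
import asyncio
import json

def build_commission_message(code: str, message_id: str = "1") -> str:
    # Same JSON as pasted into websocat above.
    return json.dumps({
        "message_id": message_id,
        "command": "commission_with_code",
        "args": {"code": code, "network_only": True},
    })

async def commission(url: str, code: str) -> dict:
    # Requires the third-party 'websockets' package (imported here so the
    # message builder above stays stdlib-only).
    import websockets
    async with websockets.connect(url) as ws:
        await ws.send(build_commission_message(code))
        async for raw in ws:           # skip any server-info/event messages
            msg = json.loads(raw)
            if msg.get("message_id") == "1":
                return msg

# e.g.: asyncio.run(commission(
#     "ws://matter-server.matter.svc.cluster.local:5580/ws", "MT:Yxxxxxxxxxxxxxxxxxx"))
```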

    At this point, you may end up with a message of the form

{ "message_id": "1", "error_code": 1, "details": "Commission with code failed for node 5." }

    in which case, wait a minute or so and try pasting the command in again. This generally means that your matter device wasn’t responding, or didn’t respond in the correct way. It seems to sometimes take several minutes for a matter device to become ready after it joins the wifi network, and I’ve had at least one case where I had to reset it to factory defaults and re-do the chip-tool phase before it would work.

    If it does work, you will get a rather large blob of json which starts something like this:

    { "message_id": "1", "result": { "node_id": 12, "date_commissioned": "2025-09-30T08:25:13.952835"....

    and then you should be able to see your device under the integration in Home Assistant.

If it doesn’t work after 10 minutes or so, I would suggest attaching tcpdump to the matter VLAN and seeing if you are getting any traffic – you would expect DHCP requests (which in my setup at least won’t be getting answered, but that is OK) and mDNS traffic on port 5353. If you don’t see any traffic, maybe your matter device hasn’t actually joined your wifi.

    Good luck.

Most of the info which actually got this working for me comes from https://community.home-assistant.io/t/commissioning-matter-devices-with-the-matter-server-without-smartphone-and-or-matter-add-on/905051/1, https://project-chip.github.io/connectedhomeip-doc/development_controllers/chip-tool/chip_tool_guide.html, and https://github.com/matter-js/python-matter-server/blob/main/docs/websockets_api.md

  • Cisco 7965 IP phones and TP-Link Gigabit Smart Switches

I had some interesting times trying to get my Cisco 7965 working with a TP-Link SG2210P switch.

The way I want this to work is to have all ports on the switch configured as access ports on my normal VLAN, but have the phone automatically run on a different VLAN, and be able to use the pass-through port on the phone for another device on the normal VLAN (the way they would normally be configured in an all-Cisco environment). I also didn’t want non-phone devices to be able to access the voice VLAN.

At first all seemed to work fine: I configured the OUI filter entry to match the MAC address prefix of the phone (the default Cisco rule it comes with doesn’t cover the phone I have), configured the switch-wide voice vlan settings, and set the “voice vlan mode” of all the access ports to “auto”.

However, I later noticed that the phone was trying to get a DHCP address on my normal VLAN instead of the voice one. I think what had happened was that I’d switched on support for LLDP and LLDP-MED, and now the phone was being told by the switch to use the voice vlan, while also being told that the voice vlan wasn’t present on the port, thus confusing it thoroughly. After lots of messing around I have come to the following conclusions:

    • Do enable LLDP and LLDP-MED
    • Set the “voice vlan mode” to “manual” for the access ports.
    • Explicitly allow the voice vlan (tagged) on the access ports.
    • Use the “voice vlan security” feature to prevent non-phone devices getting onto the voice vlan (this uses the OUI filter entries again).

    The end result should look like this:

    sw1-office#show run int g 1/0/5
    interface gigabitEthernet 1/0/5
      switchport general allowed vlan <your data vlan id> untagged
      switchport general allowed vlan <your voice vlan id> tagged
      switchport pvid <your data vlan id>
      no switchport general allowed vlan 1
      switchport voice vlan mode manual
      switchport voice vlan security
      lldp med-status
    
  • A minor case mod for the CoolerMaster “HAF Stacker 915R”

Recently I bought a CoolerMaster “HAF Stacker 915R” case for a new mini-ITX server I’m building, based around the ASRock AM1H-ITX board. I bought it mostly because it was cheap, but also because it claimed you could mount a lot of fans (6x 120mm or 4x 140mm) on the side panels. What I didn’t realize until I was installing drives in it was that you can’t have the side panel fans and the drive cage installed at the same time:

    No room on this side

    No room on this side either

    As delivered, the case has the drive bay mounted at the front cooled by a 92mm fan in the front panel – but that didn’t work for me, partly because the fan cable for the front fan doesn’t reach back as far as the motherboard! In addition, the motherboard I am using has the option of being driven by a 19V external DC power supply, at which point it supplies power to the drives from a SATA power socket on the board – and that cable couldn’t reach the drive bays either.

So I decided that instead of mounting the drives across the case, I’d rotate the bay 90° and run them front-to-back. Four small holes later…

    4 small holes drilled

    Now there is room for the fan at the side:

    Plenty of room!

    You can still get the drives in and out of the bays without having to remove it from the case:

    The tray fits between the front panel and the cage

    There’s plenty of clearance for the fan:

    There is only just clearance between the front edge of the motherboard and the rear of the drive. Note that because I’m using the DC input on this board, I don’t need to use the ATX power connector, so potentially fouling that isn’t an issue for this build.

    I’m now much more confident that the drives will get the cooling they need.

  • Building root filesystems for the Raspberry Pi

Raspbian on the Raspberry Pi is great, but the official image is large and includes all sorts of stuff I don’t need (my Pis don’t generally have screens, as I use them for playing music, being GPS NTP time servers, or collecting data on my power usage – not things which need, for example, a screen and a GUI!). I did manage to cobble together a base install using the Debian installer on a Pi, but it took a loooong time, and cloning that image every time I want to do a new one is a bit of a pain: the image is quite old now, so apt-get has to update a lot of packages to bring it up to current; the machines all get the same ssh keys; and the filesystems all have the same ‘unique’ IDs (which tends to confuse things if you simultaneously plug two of the resulting SD cards into the same PC!).

So I’ve been mucking around with a better way to generate basic Raspbian installs for the Pi. I’ve discovered that the combination of multistrap and qemu-user-static allows building a complete, up-to-date installation tree on my x86-64 Ubuntu desktop in a tiny fraction of the time it would take to do the same thing on a Pi. For my base install (which is indeed very basic) my machine takes about 5 minutes (with the package files already in my apt-cacher-ng cache). Sometimes the longest step is copying all the data to the SD card!

    One of the nice things about using multistrap to do this is the ability to make cascaded configurations – so I can define a base configuration which works for me, and then have other configurations which build on top of it for specific applications. If I make improvements to the base, then regenerating one of the cascaded configurations will incorporate the improvements automatically. It also means I don’t have to remember or document what I did to the base image to get the application-specific image, because it is all there in the configuration file and associated scripts.

The resulting trees are still proper Raspbian installs and can be updated and managed with apt-get etc. just like a normal install.

I’ve put a small collection of multistrap configuration bits and attached shell scripts up on github if anyone else wants to have a play with it. It includes one configuration file for a base install, and one for a customized install including MPD. There’s a handful of shell scripts which do most of the work beyond getting the packages extracted – these will almost certainly need customizing for your environment. You will definitely want to read the README. Oh, and you’ll need a Debian-derived (e.g. Ubuntu) machine to do the work, since multistrap relies on apt to do all the package work.
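To give a flavour of the configuration format, here is a hypothetical base config (keys reproduced from memory of the multistrap man page, so double-check them there; the paths, mirror, and package list are placeholders, not the ones from my repo):

```ini
# base.conf -- hypothetical multistrap configuration sketch
[General]
arch=armhf
directory=/srv/rpi-rootfs
cleanup=true
noauth=false
unpack=true
aptsources=Raspbian
bootstrap=Raspbian
# A cascaded, application-specific config can pull this file in with
# an include= line in its own [General] section.

[Raspbian]
packages=apt openssh-server ntp
source=http://archive.raspbian.org/raspbian
suite=wheezy
keyring=raspbian-archive-keyring
```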

  • South Australia is on a really daft timezone

    There’s a rather nice article about timezones and how wrong they can be, which shows clearly just how broken the South Australian timezone (+9:30) is. If you look at the beautiful map, you’ll notice that all of South Australia (and Northern Territory if it comes to that) is in red; that is, behind the timezone. At 15° of longitude per hour (360° in 24 hours), the +9:30 timezone would be centred at 142.5° E. South Australia extends from 129° E to 141° E (see wikipedia).
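The arithmetic is simple enough to sketch (the 4-minutes-per-degree figure just restates 15° per hour; Adelaide’s longitude below is approximate and only there for illustration):

```python
def central_meridian(utc_offset_hours):
    # 360 degrees of longitude in 24 hours -> 15 degrees per hour.
    return utc_offset_hours * 15.0

def solar_noon_error_minutes(longitude, utc_offset_hours):
    # How far mean solar noon lags clock noon for a place west of its
    # timezone's central meridian: 4 minutes per degree of longitude.
    return (central_meridian(utc_offset_hours) - longitude) * 4.0

# Adelaide is at roughly 138.6 degrees E, so on +9:30 the sun crosses the
# meridian about a quarter of an hour after clock noon.
```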

    If you look closely at the map, you’ll see part of Indonesia (West Papua) which is on the +9:00 timezone. You can see that the white coloured section (the centre of the timezone, 135° E) is roughly in line with the centre of South Australia. Surely this would make more sense? All that broken software which assumes that timezone offsets from GMT are always a whole number of hours would just work!!

    (I recently came across broken software which did in fact have non-integer timezones available. Well, sort of. They had a special case for India on +5:30, but that was all. ARRGGHH!).

  • More keyboard controller goodness

    Progress has been made.

    Firstly I’ve learnt to solder TQFP surface mount packages:

    SMT soldering

    And secondly I’ve used the resulting breakout widget to replace the Adafruit module on my breadboard:

    keyboard controller

Top left you can see the pair of shift registers which drive the keyboard matrix columns through the bank of diodes and the ribbon cables at bottom left; bottom right is the soldered board from the picture above; bottom centre are the LEDs: red ones for numlock, capslock etc, green ones for debugging the state of the device. The ribbon cable off the bottom right of the board connects to the row outputs of the keyboard matrix.

    Next step: design a board.

  • Keyboard controller project update

    It has been nearly two years since I mentioned the USB keyboard controller project. For most of that time, my primary keyboard at home has been driven by one or another bread-boarded incarnation of it. A few major points on the hardware side:

    • I’ve given up on the PIC and gone back to AVR. I ended up with some bugs which I couldn’t resolve, and I’m not sure they weren’t in the USB stack; I also wasn’t at all sure what the licensing would and wouldn’t allow me to publish.
    • This will mean learning to solder TQFP packages, and getting boards manufactured. Oh well.
• I’ve decided to target the ATmega32U2 chip – fewer pins to solder than the U4. It gives up a handful of IO pins, the ADC, 1k of RAM, and the ability to sense whether USB is plugged in when the device is externally powered. None of which matter for this project.
    • I’m using a pair of 74HC595 shift registers to drive the columns on the keyboard matrix. 16 pins driven for 3 pins on the AVR is a win. It does mean using a diode per column rather than just setting the undriven AVR pins to inputs (to avoid having outputs fight when multiple keys on the same row are held down) but again, I can live with that.
    • Being able to develop on the Adafruit 32U4 board means I have spare outputs for connecting debug LEDs to and more RAM for debug statements.

The software side has made massive advances over the last 3 days (yay for a long weekend!) and I think is now pretty much final. It presents a dual-interface USB device with a “boot keyboard” interface (all the standard keys) plus a generic HID interface for some non-standard ones (I’ve assigned one to the “System Sleep” code, which does exactly the right thing). Things I’ve learnt:

• LUFA is awesome. Especially when you consider that it was mostly written while the author was a student, and that the author responds to bug reports very quickly (and the ones I found were extremely minor!). I found it much easier to use than the Microchip USB stack as well. Oh, and the demos are brilliant for getting started.
    • Being able to use GCC is good. I don’t like proprietary development environments.
    • Using the hardware serial port for debugging messages is generally good (and you can run it at silly baudrates like 921600, although I’m not sure if that just ends up meaning bigger gaps between characters on the wire…), BUT:
    • Too much debugging output can cause oddness. I spent several hours chasing a “bug” where Linux would often wait 5 seconds between finding the first interface and the second one. Wireshark was showing “malformed packet” coming back from the device. I took out one debug statement which fired every time the device received a Control Request – and the problem vanished.
    • Oh, wireshark can dump the USB bus. Really handy!
    • Git is good. Should have started using it ages ago (I’ve been using RCS. Clearly I’m too old 🙂 )

    Next steps?

    • I’ve ordered some 32U2 chips. I’ve also got some Adafruit 32 pin TQFP breakout boards. Hopefully I should end up with one successfully soldered to the other and integrated into the breadboard in place of the current 32U4 board.
    • Experiment with a bootloader. I think I’ll want one that only triggers if you hold a pin in a specific state when plugging the device in. Working with the bare chip I’ll have access to the HWB line, which I don’t on the board I’m using now.
    • Start working on a board layout in something like Eagle or DesignSpark PCB. Don’t think I’m masochistic enough for GEDA. I’ll see how I go soldering the TQFP chips before deciding if I go SOIC for the shift registers (and maybe a diode pack?).

    If I can I’ll lay out the board such that it would make a good development board for other purposes. I might add another optional shift register for driving the LEDs – this keyboard only has 3, but even fairly normal keyboards can have 5, and the kernel source seems to recognize another 6 beyond that…

If you are crazy enough to want to build something similar, the code is up on github. The readme should give you a rough idea of how to set up the hardware – the main thing to remember is that the scanning is done active low, so I can use the pull-up resistors built into the chip rather than having to supply pull-down resistors; this means the diodes go in backwards from what you might expect!
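To make the active-low scan concrete, here’s a toy model of it (Python purely for illustration – the real firmware is AVR C, and the pin counts here are made up, not taken from my keyboard):

```python
# Toy model of an active-low matrix scan: one column at a time is driven
# low (via the shift registers), and the row port is read. Pull-ups make
# every undriven line read high, so a *clear* bit means a pressed key.

def read_rows(pressed, col, nrows=8):
    # Row-port reading with column `col` driven low; `pressed` is a set
    # of (col, row) pairs for keys currently held down.
    port = (1 << nrows) - 1            # all bits high via the pull-ups
    for (c, r) in pressed:
        if c == col:
            port &= ~(1 << r)          # pressed key pulls its row low
    return port

def scan(pressed, ncols=16, nrows=8):
    found = []
    for col in range(ncols):           # drive exactly one column low at a time
        port = read_rows(pressed, col, nrows)
        for row in range(nrows):
            if not port & (1 << row):  # low bit = key pressed
                found.append((col, row))
    return found
```

The per-column diodes exist so that two held keys on the same row can’t fight each other through the driven columns; the scan logic itself doesn’t change.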

     

• Stevenson screen

A first attempt at a Stevenson screen for the outdoor temperature sensor. It is just a section of PVC pipe with a cowl on top to keep the rain (mostly) out, a thinner pipe suspended inside it, and the sensor hung within that near the lower end. The idea is that the inner pipe won’t see direct sunlight, and air between the pipes, heated by the sun on the outer pipe, will vent up and out, drawing fresh air in at the bottom.

    I’m currently testing it in parallel with the sensor at the back of the house (which gets badly sun-affected in the mid morning).

    image

  • Finalized currentcost interface for rPi

Having breadboarded it a couple of days ago, I’ve now finalized the serial interface between the Raspberry Pi and the Currentcost cc128, using some stripboard, a dual header socket, a CD4049, and a couple of bits of wire. Pretty straightforward: I removed a couple of pins from the header socket so I didn’t have to bridge from the outside of the socket back into the middle of the board, and made sure that every unused gate has its output floating and its input tied to something. (In one case I’ve actually fed the input of a spare gate from the output feeding the rPi RX line; yes, it will slightly increase the current consumption, since an extra gate is switching instead of staying static, but the effect will be tiny and it was easier than cutting the trace and fixing it all up with wires.) The stripboard could have had the outer two rows of holes trimmed off, but they don’t interfere with anything so I didn’t bother. The LEGO case has been improved now that I don’t have wires out the side (and has a transparent block near the LEDs on the rPi – works very well).

    Parts
    Note the two missing pins in the header. There's a missing cut on the stripboard, and one cut in the wrong place.

    Top view
    Top view

    Bottom view
    Bottom view, with missing cuts added and wrong cut bridged; yes my soldering is awful.

    In situ
    With cc128 meter and rj45 connection

    Boxed and ready to go
    Boxed and ready to go

     

    Machine is now under the bed driving the (USB) bedroom speakers using shairport and feeding power data to my main server over MQTT.

  • Woot! cc128 -> rPi -> mqtt!

Got my 2nd raspberry pi today. Coinciding with the discovery that the arduino board I was going to use to submit data from the currentcost cc128 meter to mqtt seems to have a duff pin 10 (which makes it rather hard to use the ethernet shield), I decided to see if I could instead have the 3.3v RS232 from the cc128 drive the 3.3v RS232 on the raspberry pi (through 2 gates of a CD4049 running from the rPi 3.3V rail to protect the rPi if something went wrong). After finding an old floppy cable to rip apart and bodging together a LEGO case (based on the design by Biz, but hacked because of the over-sized connector I’ve used for the i/o port), it was a matter of “apt-get install mosquitto-clients python-serial”, copying the script over from the server currently doing the job (which has a long, unreliable USB cable to the cc128), and hacking it to send the data to the server rather than localhost – and voila! It worked first time.
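The script itself isn’t published here, but the shape of it is simple enough to sketch. Everything below is an assumption rather than my actual script: the cc128’s 57600-baud XML output, the serial device path, the broker address, the topic name, and the use of paho-mqtt (the original leaned on mosquitto-clients and python-serial):

```python
import re

# The cc128 emits one XML <msg> per reading; pull out the fields we want.
WATTS_RE = re.compile(r"<watts>(\d+)</watts>")
TMPR_RE = re.compile(r"<tmpr>([\d.]+)</tmpr>")

def parse_cc128_line(line):
    """Return (watts, temperature) from one line of meter output, None if absent."""
    w = WATTS_RE.search(line)
    t = TMPR_RE.search(line)
    return (int(w.group(1)) if w else None,
            float(t.group(1)) if t else None)

def relay(port="/dev/ttyAMA0", broker="192.0.2.1", topic="power/watts"):
    # Third-party deps imported here so the parser above stays stdlib-only.
    import serial                        # pyserial (the "python-serial" package)
    import paho.mqtt.publish as publish
    # Assumption: the cc128 talks 57600 8N1 on the Pi's UART.
    with serial.Serial(port, 57600, timeout=30) as ser:
        while True:
            watts, _ = parse_cc128_line(ser.readline().decode(errors="ignore"))
            if watts is not None:
                publish.single(topic, watts, hostname=broker)
```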

Disturbingly this actually costs less than an arduino + ethernet shield. The form factor is a little more awkward though (ports at both ends of the board, no screw holes), so getting it into a case along with other bits and pieces will be trickier; ideally it would be good to have a tall case with enough room in the upper part to hold a PCB for additional components.

    rPi + cc128