An Investigation of My Home Thermostat

Steven Yue · Mar 23

Preface

I live in one of those large apartment complexes in San Francisco.

The complex was recently built, and I am the first tenant in my current apartment.

Walking down the hallway inside the complex, I always get this feeling of a modern, newly-furnished hotel.

Incident

There was just one small thing that bothered me.

I always set the thermostat to ~78 degrees and leave it on auto mode.

However, even though the screen shows that my room is at 78 degrees, I feel cold every once in a while.

Sometimes, I also get headaches because of the “coldness”.

At first, I thought it was because I had stayed in a warm environment for too long and my sense of the temperature was getting distorted.

But one time when my girlfriend visited my apartment, she complained about suddenly feeling cold as well.

I had to turn up the thermostat to 80 degrees to heat up the room again, but eventually, we both started to feel cold again.

This caught my attention.

It couldn’t be possible that both of us were losing our sense of temperature.

I also ruled out the possibility that the thermostat was broken — I just moved in and it was a brand new apartment complex.

Plus, I only felt cold for a period of time, and the room got warm again (or, at least, I stopped feeling cold).

Investigation

Now this problem had piqued my interest.

I decided to figure out why.

The easiest way to check whether the room temperature matched the display temperature on the thermostat was to get an actual temperature sensor and measure the room temperature.

Luckily, I had two spare temperature sensors from Samsara, my previous employer.

BLE Temperature/Humidity Sensor (from Samsara)

Samsara is a unicorn IoT startup focused on building connected experiences for industrial applications by deploying real-time IoT hardware into the field.

A typical application is fleet tracking, where customers install Samsara’s latest Vehicle Gateway in their operating trucks, and the Gateway immediately gains insight into each vehicle’s operating status.

By aggregating all the edge data in the cloud, Samsara provides a real-time dashboard that gives customers unprecedented visibility into their daily operations.

That was just a short elevator pitch for the company.

Returning to my thermostat problem: I found two Samsara temperature/humidity sensors lying on my desk, each with an LCD display showing the current temperature.

These sensor modules are used to collect environmental data from places like factories or trailers.

The local gateways will collect data from these sensing modules through Bluetooth and send up information to the cloud.

These modules are pretty useful because they can directly display the temperature/humidity on the LCD display.

However, I would actually need a Samsara gateway to collect these data in digital form through Bluetooth.

Unfortunately, I surrendered my gateway after I left Samsara, and I didn’t have access to Samsara’s cloud platform anymore.

Therefore, these temperature modules were no different from off-the-shelf temperature sensors that could only tell me the current temperature.

I took the reading from the sensor modules — both of them were reporting 78 degrees (and my thermostat was set to 78).

So it appeared to be normal.

Soon I noticed that the display on the sensor module updates very infrequently (maybe every few minutes), and I couldn’t afford to stare at the sensor module for an hour just to read the temperature changes in my room.

It would be nice if I could have my own sensor module collecting data and sending it up to my own cloud, I thought.

Then I would have the freedom to process the data remotely on my laptop and find out how the temperature changes.

Therefore, I decided to build everything on my own.

Building the System

I sat down and drafted a rough block diagram of the hardware and software components.

Block diagram

Basically, I wanted to build a system that collects sensor data (temperature, in this case) and uploads the data to the cloud.

I also wanted some sort of front-end client to view these environmental data trends in real time.

Now I’m going to explain each component of the system and how I built it.

Sensors

In order to collect environmental data, we’ll need sensors.

I quickly browsed Adafruit to look for some environment sensors.

Since I was building a sensors platform for my home, I might as well get a few more types of sensors to play with.

After some research, I settled on these sensors.

Adafruit Si7021 Temperature & Humidity Sensor Breakout Board (www.adafruit.com)

Adafruit TSL2591 High Dynamic Range Digital Light Sensor (www.adafruit.com)

Normally, temperature and humidity sensors are integrated onto the same chip/board because people usually collect those two measurements together.

I also got a light sensor that is capable of measuring the ambient light level in my apartment.

All of these were just standard 5V compliant sensor breakout boards that communicate over I2C.

Before I placed the order, I also grabbed a few jumper cables and breadboards for rapid prototyping.

It took ~3 days for the parts to arrive, and then I quickly put them together onto the breadboard.

Sensor breakout configuration

Gateways

The picture above shows the entire configuration of the system — aside from the sensors, there is also a Raspberry Pi connected to the circuit.

The Raspberry Pi here is going to act as a gateway to bridge between sensors and the cloud — to forward sensor readings to the backend.

On the Raspberry Pi, I had a few demo applications that were capable of enabling the I2C bus, talking to the sensors and reading out corresponding data.

Most of the applications were written in Python, and one specific application was written in C.
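For a rough idea of what one of those Python demo readers looks like, here is a minimal sketch that polls the Si7021 over I2C. It assumes the Adafruit CircuitPython driver (adafruit_si7021) is installed on the Pi; the polling interval and print format are placeholders rather than the actual demo code.

```python
# Minimal I2C temperature/humidity reader sketch (assumes the Adafruit
# CircuitPython driver; placeholders, not the actual demo application).
import time

import board
import adafruit_si7021


def main(interval_s=60):
    # board.I2C() opens the Pi's default I2C bus (the SCL/SDA header pins).
    i2c = board.I2C()
    sensor = adafruit_si7021.SI7021(i2c)

    while True:
        temp_c = sensor.temperature          # degrees Celsius
        humidity = sensor.relative_humidity  # percent RH
        print(f"temp={temp_c:.1f}C humidity={humidity:.1f}%")
        time.sleep(interval_s)


if __name__ == "__main__":
    main()
```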

Once I connected the sensors to the Raspberry Pi and validated every single sensor using the demo application, the next step was to make one unified application that could sample all the sensors and send up the data.

Instead of rewriting everything in Python, I decided to leverage multiprocessing: fire up all the demo applications at the same time and use some sort of communication channel to collect the sensor data from them.

Setting up multiprocessing was easy: I just needed to wrap each process’s logic in a Python file and use a master Python script to spawn each process.

Communicating between processes was hard.

After balancing a few messaging options, I decided to go with ZeroMQ as a messaging framework between the processes.

```python
from multiprocessing import Process

import sensors.lum_sensor
import sensors.temp_sensor
import sensors.uploader


def main():
    plist = []

    # Uploader
    plist.append(Process(target=sensors.uploader.main))

    # Temp sensor
    plist.append(Process(target=sensors.temp_sensor.main, args=(60,)))

    # Lum sensor
    plist.append(Process(target=sensors.lum_sensor.main, args=(60,)))

    for p in plist:
        p.start()

    plist[-1].join()

    # Terminate all of them
    for p in plist:
        p.terminate()


if __name__ == '__main__':
    main()
```

ZeroMQ supports a few messaging patterns.

Typical ones include Client/Server — where multiple clients can communicate with one server to exchange information, and Producer/Consumer — where upstream services pass information to the downstream services.

In this case, I chose the Client/Server configuration, so all my sensor data collecting processes could be clients, reporting their measured sensor data to the Uploader process, which would then upload the data to the backend.
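As a minimal sketch of that Client/Server pattern with pyzmq (the port number and payload handling are placeholders, not my exact sensor code):

```python
# Minimal ZeroMQ Client/Server sketch (pyzmq). The port and payloads are
# placeholders, not the exact code from my sensor processes.
import zmq


def uploader_server(port=5555):
    # The Uploader acts as the server: it receives readings and acknowledges them.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind(f"tcp://*:{port}")
    while True:
        reading = sock.recv()  # binary payload from a sensor process
        # ... queue `reading` for upload to the backend ...
        sock.send(b"ack")


def sensor_client(payload: bytes, port=5555):
    # Each sensor process acts as a client and reports one reading at a time.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.connect(f"tcp://localhost:{port}")
    sock.send(payload)
    sock.recv()  # wait for the Uploader's ack
```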

Sending up data

Before worrying about sending up data, I first defined all my message types in Protobuf.

Protobuf is Google’s cross-platform, type-safe protocol definition language that you can use to define the data structures of messages that need to be shared across different platforms and systems.

You just need to define the struct in a .proto file, and the Protobuf compiler can compile it into any of the commonly used languages.

The compiled struct will also contain helpful methods such as serializing to a binary string or reconstructing itself from it.

Basically, you can think of Protobuf as a more advanced and type-safe version of JSON.

Once I had the proto messages in place, communication became much more straightforward.

I just needed to make sure that I could deliver a binary string from one place to another — then Protobuf could help me parse it into whatever data structure I had defined.
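As a quick illustration, the round trip in Python looks roughly like the sketch below. The message name and fields are made up for the example (my actual .proto isn’t shown here):

```python
# Hypothetical example. A file like `temperature.proto` might define:
#
#   message TemperatureReading {
#     int64 timestamp_ms  = 1;
#     float temperature_c = 2;
#   }
#
# `protoc --python_out=.` would then generate temperature_pb2.py.
import temperature_pb2

msg = temperature_pb2.TemperatureReading(
    timestamp_ms=1553374891311,
    temperature_c=23.4,
)

# Serialize to a binary string for transport...
payload = msg.SerializeToString()

# ...and reconstruct it on the other side.
decoded = temperature_pb2.TemperatureReading()
decoded.ParseFromString(payload)
print(decoded.temperature_c)
```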

I had multiple options to connect my local gateways to the cloud.

Using an HTTP Web server: I could simply open a web server in the backend that has a data endpoint that all my devices would hit.

Then all my gateways can simply embed the Protobuf binary message in the POST form payload and send it up to the backend.

This was honestly the easiest way I could pursue, but it just didn’t sound that cool to me.

Using a direct TCP connection: I wrote a TCP server that could parse messages for my OS class back in college.

Maybe I could resurrect that server and use it to listen to the messages from my gateways.

However, if I were to use TCP, I would lose all the benefits of HTTP (the request payload structure, connection management, and so on).

Also, I would need to maintain all the connections and handle all kinds of corrupted connection states with TCP.

Therefore, I figured that this idea was not the best for me.

Using a UDP connection: Nope.

Not gonna do that.

Using MQTT: MQTT is a light-weight messaging protocol for IoT devices.

I had used MQTT in one of my projects before.

It’s basically a giant public chatroom of IoT devices hosted by these MQTT “broker” services.

When you want to post a message into the chatroom, you need to prefix your message with a topic.

You can subscribe to certain topics to receive any messages on those topics.

You can also subscribe to a topic prefix — for example, if all the gateways publish their messages to /device/gateways/<gateway-id> and you subscribe to /device/gateways/#, you will automatically see all the messages published by all the gateways.

After comparing a few options, I decided to go with MQTT.

I assigned a gateway-id to every Raspberry Pi, and each would push its binary strings to its own gateway-id-suffixed topic.
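On the gateway side, publishing looks roughly like the sketch below, using the paho-mqtt client. The broker hostname and exact topic layout are assumptions for illustration, not my exact setup.

```python
# Minimal MQTT publish sketch (paho-mqtt). Broker host and topic layout
# are assumptions for illustration.
import paho.mqtt.client as mqtt

GATEWAY_ID = "12345678"
TOPIC = f"device/gateways/{GATEWAY_ID}"

client = mqtt.Client()
client.connect("my-broker.example.com", 1883)
client.loop_start()

# `payload` would be the Protobuf binary string produced by the Uploader process.
payload = b"..."
client.publish(TOPIC, payload, qos=0)

client.loop_stop()
client.disconnect()
```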

I also needed to run an MQTT broker service that could provide this chatroom.

There were a few options online and I found a containerized one so I could easily spin up a broker service on AWS.

On the other end of all of this, I also needed some backend services that would connect to the chatroom and subscribe to the device message topic.

I did that part in Go, and the code was pretty straight-forward.

```go
package mqttworker

import (
    "log"
    "os"
    "strconv"

    MQTT "github.com/eclipse/paho.mqtt.golang"
)

type MQTTSubMessage struct {
    Topic   string
    Payload []byte
}

func Run() {
    hostname := "broker"
    port := uint16(1883)

    log.Println("Connecting to " + hostname + ":" + strconv.Itoa(int(port)))

    opts := MQTT.NewClientOptions()
    opts.AddBroker(hostname + ":" + strconv.Itoa(int(port)))
    opts.SetClientID("mqtt-server")

    msgChan := make(chan MQTTSubMessage, 10)

    opts.SetDefaultPublishHandler(func(client MQTT.Client, msg MQTT.Message) {
        msgChan <- MQTTSubMessage{Topic: msg.Topic(), Payload: msg.Payload()}
    })

    client := MQTT.NewClient(opts)
    if token := client.Connect(); token.Wait() && token.Error() != nil {
        panic(token.Error())
    }
    defer client.Disconnect(250)
    log.Println("Connected!")

    if token := client.Subscribe("device/#", byte(0), nil); token.Wait() && token.Error() != nil {
        log.Println(token.Error())
        os.Exit(1)
    }

    worker := MQTTWorker{
        msgChan: msgChan,
    }

    if err := worker.Run(); err != nil {
        log.Println(err)
    }
}
```

The code above basically opens a connection to the MQTT broker and subscribes to the device/# topic prefix.

Whenever it receives a relevant message, it will populate an MQTTSubMessage object and feed it to the MQTTWorker instance.

I will talk about the worker instance in the next section.

Storing data

Now that I already had a working funnel of sensor data, I needed to figure out a way to store all the data points in the cloud.

I decided to do nothing fancy — I simply created a containerized MySQL database with a gateway_message table.

In the table, I would store the gateway_id of my Raspberry Pi, the timestamp of the data point, a sensor_type that identifies the type of sensor as an enum, and finally a binary_message that contains the Protobuf binary string.

That’s a total of 5 columns (those 4 plus one unique ID column).

The final missing piece before the sensor data could rest in the DB was to connect the MQTT listener service to the database.

In the previous section, I created an MQTTWorker instance which consumes MQTT messages one at a time.

In the worker routine, I simply created a connection to the MySQL database and added in code that would insert a new row into the table when a new message gets consumed.

I spun up the services on my $5 DigitalOcean Droplet instance.

I also configured the Python code on my Raspberry Pi as a system service so it would be restarted automatically whenever it crashed.
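For reference, a systemd unit along these lines (paths and names are hypothetical) is enough to get that restart-on-crash behavior:

```
# /etc/systemd/system/home-sensors.service -- hypothetical paths and names
[Unit]
Description=Home sensor gateway
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/pi/home-sensors/main.py
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```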

After a few days, I manually SSH’ed onto my Droplet and inspected the MySQL table.

```
*************************** 32718. row ***************************
          id: 32718
  gateway_id: 12345678
timestamp_ms: 1553374891311
   data_type: 2
 binary_data: {?A)-B
*************************** 32719. row ***************************
          id: 32719
  gateway_id: 12345678
timestamp_ms: 1553374894470
   data_type: 3
 binary_data: ??$
*************************** 32720. row ***************************
          id: 32720
  gateway_id: 12345678
timestamp_ms: 1553374953343
   data_type: 2
 binary_data: ׳A -B
*************************** 32721. row ***************************
          id: 32721
  gateway_id: 12345678
timestamp_ms: 1553374954536
   data_type: 3
 binary_data: ??%
32721 rows in set (0.05 sec)
```

Looking good.

Although I had no idea what those binary strings meant, hopefully Protobuf could help me decode them later.

Querying data

Now I had finished building the first half of the system shown in the hand-drawn diagram above.

The second part was going to be user-facing — allowing users to actually load the sensor data and display it on the front-end.

Querying data out of the database sounded pretty straightforward — I just needed to use the same MySQL driver to perform a filtered SELECT on my sensor message table.

I could also just build a simple Web server that handles HTTP requests and performs those SELECT operations.

Since I was going to build a real-time front-end that could display the sensor data, I decided to go with Thunder.

Thunder is Samsara’s open source framework that enables real-time data querying through GraphQL all the way to a React/JS front-end.

It also does a lot of smart stuff such as struct reflection — basically, I could just create a few Go structs whose fields are tagged with the SQL table column names, and Thunder would SELECT from the table for me and load each row of data into those Go structs.

I’m not going to go too deep into explaining GraphQL, because you can find lots of helpful resources online, such as this one:

GraphQL: Everything You Need to Know (medium.com)

With the help of Thunder, I successfully created a GraphQL server that was able to handle GraphQL queries over a live WebSocket connection and fetch the corresponding data from my database.

Here is a screenshot of the GraphiQL Explorer that shows me querying temperature data from gateway 12345678.

GraphiQL Explorer View

Live front-end

Inside Thunder’s repository, there was also a demo client, which was basically a React/JS web app that connects to the GraphQL server.

I took that and modified most of it, except the part that actually handles the connection and queries.

I also added a few useful front-end framework components such as AntD and React Vis.

Real-time front-end view

Above is a screenshot of my front-end view.

It’s showing humidity, temperature and ambient light readings of my apartment in the past 24 hours.

Results

I ran my entire system for a few days, and finally, I was ready to figure out what went wrong with my thermostat (almost forgot about the original goal, lol).

Temperature changes in my apartment

Here is a graph of temperature changes in my apartment for the past 24 hours.

As you can see, the data points from 0 to 750 look like a sawtooth wave — that’s the thermostat’s closed-loop regulating cycle.

When the temperature reaches 23C, which is roughly 73F, the AC stops working until the temperature drops back down to 20C, which is roughly 68F! That’s why I was feeling cold — my room was actually at 68F while my thermostat showed it was being kept at 75F.

I went to work in the morning roughly at data point 800.

As you can see, it was early in the morning and I turned off my AC, so the temperature dropped down in a natural curve.

When the sun came up higher later, my room was exposed to direct sunlight, and the whole room became warmer — also in a very natural curve.

Temperature vs. Humidity

Another interesting thing I noticed: if you look at the Temperature and Humidity graphs closely, you can see that they are roughly inversely related.

I think this is how modern ACs work — regulating temperature while also adjusting the amount of water in the air in the opposite direction.

Aside from temperature and humidity, I also looked at the ambient light sensor graph.

Ambient Light changes in 24 hours

From the graph above, you can roughly see ~12 hours of daylight in my room.

During data points 800~950, I think the sun was high enough that there was direct sunlight beaming in through the windows.

From this graph, I can infer the changes in weather and lighting conditions in my room.

I stopped here and ended my investigation.

Improvements

My investigation has been over for a few weeks.

However, I wanted to do something to wrap up the project.

The first thing I noticed was that both my MQTT worker service and GraphQL service are written in Go.

I had to compile them into two different binaries because they are different parts of the system.

In order to compile them into two different binaries, I essentially need to create two Go projects.

However, they share a lot of code: for example, both need to use the same Protobuf library and generated structs when dealing with binary data.

If I kept two separate Go projects, I would need to duplicate a lot of that, which would make things tricky.

My solution was to build both services into the same mono-binary, since they are both pretty lightweight.

The binary takes an argument when it runs: it starts the MQTT worker routine when --service=mqttworker is passed, and the GraphQL server when --service=gqlserver is passed.

Therefore, containerizing both services becomes so much easier — both services can basically use the same container image, just with different runtime args.

The second thing I noticed was that it took me almost forever to deploy all the services and start them in the correct order manually.

I had to start the MySQL daemon and then run the migration script.

After all of those were running, I could finally start the GraphQL server as well as the MQTT worker.

I wanted to create a script that automates this entire process so I can create/deploy/destroy the infrastructure easily.

docker-compose piqued my interest.

When I worked with containerized services, I mainly used docker to build and start/stop the container.

I read that docker-compose is capable of running a pre-defined configuration that choreographs the containers, starting and stopping them in order and with customizable arguments.

After rounds of trials and errors, I ended up with a pretty satisfying script.

```yaml
version: '3'
services:
  db:
    image: mysql:5.6
    command: mysqld --default-auth=mysql_native_password
    volumes:
      - ./db/binlog.cnf:/etc/mysql/conf.d/binlog.cnf
    ports:
      - "3306:3306"
    expose:
      - "3306"
    env_file:
      - ./db/db_env.env
  migration:
    image: mathewhall/mysql_migration
    volumes:
      - ./db/migrations:/docker-entrypoint-migrations.d
    links:
      - db
    env_file:
      - ./db/db_env.env
  broker:
    build: "./mosquitto"
    ports:
      - "1883:1883"
  worker:
    build: "./mqtt-server"
    environment:
      - SERVICE=mqttworker
    env_file:
      - ./db/db_env.env
  gqlserver:
    build: "./mqtt-server"
    ports:
      - "3030:3030"
    environment:
      - SERVICE=gqlserver
    env_file:
      - ./db/db_env.env
  graphiql:
    build: "./graphiql_client"
    ports:
      - "3000:3000"
```

With the script, I could basically run the entire infrastructure in one cloud instance.

All I need to do is clone the repository, install docker-compose, and then run docker-compose up.

The rest is all automated.

Postface

With all the improvements in place, I can finally rest now.

It’s been a long journey trying to solve a really simple problem, but I learned a lot about building the end-to-end infrastructure.

I’m currently thinking of adding a few more of these sensor-gateway pairs in my apartment so I can tell the temperature gradient (or difference) between different rooms.

I always feel that my bathroom is 5 degrees colder than my living room.

Guess I’m going to find out.

If you also want to work on things like this but you don’t like buying all the components and building everything from scratch, Samsara is hiring! You can get essentially the same experience working across the stack and building cool features.

Check out sensors.cool for the open positions :)
