Why care about MIT App Inventor?

The other day I made a post about using App Inventor to create a simple UI for the Particle Photon. I have also been working on a project that involves an internet-connected Weather Cloud. To make it easy for the owner to control this cloud, I have put together an Android app.


This works much faster than using something like IFTTT and requires no additional libraries like Blynk.

This brings up an interesting observation when it comes to IoT. If you want to interact with your devices, there are not a lot of good ways to go about it.
You can throw some Ajax onto your web server. If you have a web server, that is.
You can connect your device's API to another company's API and send your data on an epic journey, even though you are often in the same room as the device you are sending data to.
You can make your own API and take pride in joining a legacy of frustrated developers worldwide. Then scrap it all because IFTTT is just so much easier.
You could add a few new libraries to your firmware so that it can talk to a custom UI app like Blynk. This is actually not a bad option, except that I should not have to add a new library just to use the basic HTTP interaction that comes standard with the Particle API.
Going back to the web server option, you can just use a simple HTML page and form data for interaction. That just leaves your device credentials open for the world to see.

This is the problem with IoT right now. There are lots of ways to interact with devices, but no really good way to just go from user to device for the average person. App Inventor gives us a quick fix by letting us create an app that just uses HTTP GET and POST requests while keeping the credentials private. However, we need something better: an app that uses the basic, preexisting protocol with an intuitive UI.
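For context, calling a Particle cloud function really is just one HTTP POST, which is exactly the kind of request App Inventor's Web component can make. Here is a minimal Python sketch of that request; the device ID, function name, and token are placeholders, not real credentials:

```python
ACCESS_TOKEN = "your_access_token"  # placeholder -- substitute your own token

def build_particle_request(device_id, function_name, argument):
    """Build the URL and form data for a Particle cloud function call."""
    url = "https://api.particle.io/v1/devices/{}/{}".format(device_id, function_name)
    payload = {"access_token": ACCESS_TOKEN, "args": argument}
    return url, payload

# With the requests library you would then send it like this:
#   url, payload = build_particle_request("0123456789abcdef", "ledControl", "on")
#   requests.post(url, data=payload)
```

App Inventor's Web component does the same thing: set the URL, then PostText with the form-encoded body.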

This is what I am working on at the moment with Vince at SoDo MakerSpace. An app like Blynk with no additional libraries, IFTTT without the additional API hooks, and Ajax without all of the coding.

I feel like this will help to open the door to the dream of IoT for the masses.

OpenCV and Python Color Detection – PyImageSearch

I am working on some stuff at the MakerSpace that involves computer vision. Hopefully this information will help. We are currently trying to do this another way, but it is getting more complicated than I think it should be.

Python code for taking a photo and saving it:

https://www.raspberrypi.org/documentation/usage/camera/python/README.md
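The embedded snippet did not survive, so here is roughly what that capture code looks like, following the picamera docs linked above. The timestamped filename helper is my own addition so repeated shots do not overwrite each other:

```python
from datetime import datetime

def photo_filename(prefix="capture"):
    """Build a timestamped .jpg filename so each shot gets a unique name."""
    return "{}_{}.jpg".format(prefix, datetime.now().strftime("%Y%m%d_%H%M%S"))

def take_photo(path):
    """Capture a still image to `path`. Only runs on a Pi with a camera module."""
    from picamera import PiCamera  # available on Raspberry Pi OS
    from time import sleep
    camera = PiCamera()
    try:
        camera.start_preview()
        sleep(2)  # give the sensor a moment to adjust exposure
        camera.capture(path)
    finally:
        camera.close()

# On the Pi: take_photo(photo_filename())
```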

Button Pressed code:

This should listen for the button to be pressed, then run your code. If the code returns Test as “open”, Pin 18 goes high.
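That embedded code is gone too, so here is a sketch of the button loop as I understand the description, using RPi.GPIO. The pin numbers and the check_door() callback are placeholders for whatever your own code returns:

```python
def pin_should_go_high(test_result):
    """Decide the output state: Pin 18 goes high only when Test is "open"."""
    return test_result == "open"

def run_on_button(check_door, button_pin=23, output_pin=18):
    """Block until the button is pressed, run check_door(), drive Pin 18."""
    import RPi.GPIO as GPIO  # only available on the Raspberry Pi
    from time import sleep
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(button_pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.setup(output_pin, GPIO.OUT)
    try:
        while True:
            GPIO.wait_for_edge(button_pin, GPIO.FALLING)  # wait for a press
            GPIO.output(output_pin, pin_should_go_high(check_door()))
            sleep(0.3)  # crude debounce
    finally:
        GPIO.cleanup()
```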

Here is the image detection part of the code. I have yet to get this going:

http://www.pyimagesearch.com/2014/08/04/opencv-python-color-detection/
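The core of that PyImageSearch post is cv2.inRange: give it lower and upper BGR bounds and it hands back a binary mask of the pixels in between. The same thresholding can be written in plain NumPy, which is a handy way to see what inRange is actually doing:

```python
import numpy as np

def in_range(image, lower, upper):
    """NumPy equivalent of cv2.inRange: 255 where every channel of a pixel
    falls inside [lower, upper], 0 everywhere else."""
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)
    mask = np.all((image >= lower) & (image <= upper), axis=-1)
    return (mask * 255).astype(np.uint8)

# Example: pick out mostly-red pixels in a tiny 2x2 BGR "image".
img = np.array([[[0, 0, 200], [200, 0, 0]],
                [[10, 10, 180], [255, 255, 255]]], dtype=np.uint8)
mask = in_range(img, [0, 0, 100], [80, 80, 255])  # left column matches
```

In the real pipeline you would pass the mask to cv2.bitwise_and to show only the detected color.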

Coding for the LEDiva™

Tonight I made some real headway on my small LED driver. I added the ability to change the number of NeoPixels that are displayed, then update the onboard EEPROM so that the ATtiny remembers what you selected even after a power cycle.

So now if you press and hold either of the two buttons, then turn the board on, it will let you select the number of LEDs to use. Just press the Color or Mode buttons to add or remove an LED address.

Q: Why would I want to change the number of LEDs that I am using on the fly?

A: Basically, the LEDiva™ is intended to be a generic, all-purpose NeoPixel driver. You could use 140 NeoPixels, or just one. Some of the animations, like colorWipe();, start with the first LED address and loop until they reach the last LED, as defined by the number of pixels the NeoPixel library was told were available on startup. There is no way for the LEDs to tell the microcontroller how many of them you just connected, so the animations will just run all the way out to 140, connected LEDs be damned. This can slow things down if you are only using 3 physical LEDs. To fix this I simply tell the NeoPixel library that I want to use all 140, but make it so that the loop that updates the LEDs stops when it reaches the user-defined number of LEDs.
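The trick is easier to see in code. This is not the LEDiva™ firmware itself (that lives in Arduino C++ on the ATtiny); it is a quick Python simulation of the same idea, declaring the maximum strip length but capping the update loop at the user-selected count:

```python
MAX_PIXELS = 140  # what the NeoPixel library is told at startup

def color_wipe(strip, color, user_pixel_count):
    """Fill the strip one LED at a time, stopping at the user-selected count
    instead of walking all the way out to MAX_PIXELS."""
    for i in range(min(user_pixel_count, MAX_PIXELS)):
        strip[i] = color  # on hardware: strip.setPixelColor(i, color); strip.show();
    return strip

strip = [None] * MAX_PIXELS          # None = never updated
color_wipe(strip, (255, 0, 0), 3)    # user told the LEDiva there are 3 LEDs
```

Same NeoPixel setup, but the animation only ever touches the LEDs that physically exist.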

So if you use the new sequence to tell the LEDiva™ that you want to use 3 LEDs, it will only loop LED updates 3 times rather than 140. Now that I have added storage to the EEPROM, it will remember the number of LEDs you selected the next time you turn it on. Set it and forget it.

Here is the latest code I have:

Now I just need to save the color and mode to EEPROM, but I will save that for another day :)