Jim's Depository

this code is not yet written
 

It is worth knowing that libicu likes to append version strings to its API symbols behind your back. If your header files do not match your libraries (say, you have the Swift compiler installed on Debian and it has replaced /usr/include/unicode/ with its own nailed-down version), you will end up trying to link symbols like utext_close_swift_65 and you will be unhappy.

I removed the Swift compiler and reinstalled the libicu-dev package, and all is good now.

Sometimes I want to work on my experimental operating system when I'm away from my Linux monsters. To that end I keep the ability to build it on macOS. Broadly speaking that means a qemu built with SDL support, because the Cocoa build in brew hasn't worked in years.

But the wrinkle… you can't use the qemu -kernel flag with 64-bit multiboot kernels. It appears to be a "won't fix" from qemu, so I need to make ISO images, but making a working, modern grub on macOS, cross compiling from ARM, is not anything the grub developers envisioned.

Note: You will find people using legacy grub with just an eltorito.bin. This won't work well for 64-bit x86: you have to relink to elf32, then your symbol entries are all in the wrong format and insanity ensues.

So this is how I tackle it:

  • Use mkisofs from cdrtools from brew to build the ISO image.

  • Rip the guts out of an ISO I made on a Linux machine using grub and stuff those into my ISO.

If you mount an ISO you have made, you will find a boot/grub directory. Take that and stick it into your ISO, add your own boot/grub/grub.cfg, pack it all up and you are ready to go.

The relevant part of my Makefile looks a bit like this…

iso/boot/grub/grub.cfg : $(srcdir)/scripts/grub.cfg | iso/boot/grub
        cp $< $@

iso/boot/grub/i386-pc/eltorito.img : $(srcdir)/scripts/grub.tar.gz | iso/boot/grub
        tar -C iso/boot -xf $<

os.iso: iso/boot/kernel.elf iso/boot/grub/grub.cfg iso/boot/grub/i386-pc/eltorito.img
        mkisofs -R -b boot/grub/i386-pc/eltorito.img -no-emul-boot -boot-load-size 4 -boot-info-table -o $@ iso

You've noticed I grabbed a grub.tar.gz from someplace you can't see. I've added it as an attachment to this post. I'm using a single file (eltorito.img) as a proxy for the whole thing being unpacked. Don't fiddle with the pieces and expect make to notice.

Attachments

grub.tar.gz 3596975 bytes

The pinout diagram for the Raspberry Pi Pico is miserable when printed on a monochrome laser printer. Fortunately it was in SVG, so I took a copy, dragged it into Graphic (the tragic rename by Autodesk), and made it boring but legible. (Assuming you are on a white background. It is displayed here in the default view; in dark mode it looks like garbage.)

I don't print enough color to keep the ink live in my inkjet printers and HP abandoned my networked color laser printer (then a mouse ate part of it).

If you find yourself in the same position, then enjoy! Here it is, in its PDF and SVG glory.


The PDF is down in the attachments.

Attachments

The Google scanned indicator has been updated to this decade's aesthetic. Also, it and the verified user indicator have been brought into the CSS to prevent loading image resources.

I'm not totally happy about that. They are spans with images embedded; I'd rather have them be images with shared content for the sake of alt tags. But oh well.

Glad to see you tested the colors in light mode this time.

Beginning now, well, slightly ago… all post authors get a blue checkmark by their name.

This feature sucks. It has poor contrast in light mode and hurts my eyes. You should do broader accessibility testing before rolling out major new features. I think the developers must all use dark mode all the time.

Heretofore I have caught all port 80 requests to my web sites and helpfully redirected them to https on port 443. But not everyone wants an SSL-capable browser, and SSL on a tiny embedded device which you expect to run for more than 10 years without a firmware replacement is a mess.

Enter Upgrade-Insecure-Requests

Some modern browsers will send an Upgrade-Insecure-Requests: 1 header when they request a non-SSL resource. This tells the server that the browser would be happy to use SSL if only the server would redirect it to an appropriate URL.

I'm using nginx for all my web servers these days. (I tried caddy and was largely liking it, but got tripped up on X-Accel-Redirect support which is required by this blog software among other things.) So, here is a sketch of my nginx configuration file for a simple web site.

server {
    server_name  yourserver.example.com;

    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;

    … A bunch of site configuration which is not germane …

    # Concatenating two values to get cheap logic: "on1" "on" "1" "" are possible here.                                                  
    set $do_http_upgrade "$https$http_upgrade_insecure_requests";

    location / {
        if ($do_http_upgrade = "1") {
            add_header Vary Upgrade-Insecure-Requests;
            return 307 https://$host$request_uri;
        }

        index  index.html;
    }
}

The important takeaways here are:

  • I'm doing all my listens in the same server block

  • That $do_http_upgrade variable makes a cheap AND function by concatenating two values: when I check for "1" I am testing that https is off and upgrade_insecure_requests is present.

  • Down in the location I do the redirect if requested and needed.

  • That Vary header comes from an MDN example. Maybe it will keep some forsaken middleware box from inappropriately caching the upgrade, but I'm sure there are some where it won't. Their problem, not mine. You might also try a never-cache header to try to keep the middleware boxes from breaking your site.
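The AND-by-concatenation trick is just string logic, and it can be sketched in ordinary code. This is only an illustration of the nginx decision above; the function name is mine:

```python
def should_upgrade(https: str, upgrade_header: str) -> bool:
    """Mimic nginx: concatenate $https ("on" or "") with
    $http_upgrade_insecure_requests ("1" or "") and compare to "1"."""
    # Possible concatenations: "on1", "on", "1", "".
    # Only "1" means: not already on SSL, AND the browser asked to upgrade.
    return https + upgrade_header == "1"

print(should_upgrade("", "1"))    # plain HTTP, header present: True, redirect
print(should_upgrade("on", "1"))  # already on SSL: False
print(should_upgrade("", ""))     # old embedded client, no header: False
```

The embedded-device case falls out naturally: a client that never sends the header never matches "1", so it stays on port 80 untouched.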

Wasted time: 10 hours.

I wanted to include a library, which I maintain, inside an ESP-IDF project. There is a nasty set of interactions between idf_component_register() and ExternalProject_Add() which make this brutally fragile.

To find your .h files in your project, you are going to have to have an INCLUDE_DIRS in your idf_component_register(). This is going to prevent you from using the download support built into ExternalProject_Add(), since the files won't exist when they are checked. So you are going to put your library in as a git submodule.

I settled on this layout to balance sane paths in the component's CMakeLists.txt file against polluting the library. (For purposes of this example, pretend my library is named tomlbed, since it is.)

MyProject
   components
      tomlbed
         CMakeLists.txt
         tomlbed-upstream   <<---- this is my submodule

This keeps the ESP-IDF stuff out of my library. Now for the results of 10 hours of googling and whacking about, the CMakeLists.txt file…

#
# We have to make the component know where its .h files are
# and where to find its .a file.
#
idf_component_register( INCLUDE_DIRS ${COMPONENT_DIR}/tomlbed-upstream/include )
target_link_libraries( ${COMPONENT_LIB} INTERFACE ${CMAKE_BINARY_DIR}/esp-idf/tomlbed/libtomlbed.a )

#
# Declare our external project.
# I believe the BUILD_BYPRODUCTS interacts with the
# 'target_link_libraries' above to force this to build.
#
ExternalProject_Add( tomlbed_build
                     PREFIX ${COMPONENT_DIR}
                     SOURCE_DIR ${COMPONENT_DIR}/tomlbed-upstream
                     DOWNLOAD_COMMAND ""
                     CONFIGURE_COMMAND ""
                     BUILD_IN_SOURCE 1
                     BUILD_COMMAND make CC=${CMAKE_C_COMPILER} CFLAGS=${CMAKE_C_FLAGS} AR=${CMAKE_AR} lib
                     INSTALL_COMMAND make CC=${CMAKE_C_COMPILER} CFLAGS=${CMAKE_C_FLAGS} AR=${CMAKE_AR} install libdir=${CMAKE_BINARY_DIR}/esp-idf/tomlbed includedir=${CMAKE_BINARY_DIR}/esp-idf/tomlbed
                     BUILD_BYPRODUCTS ${CMAKE_BINARY_DIR}/esp-idf/tomlbed/libtomlbed.a ${CMAKE_BINARY_DIR}/esp-idf/tomlbed/include
                     BUILD_ALWAYS 1
                     )

#
# Get that SOURCE_DIR variable hauled out so I can use it
#
ExternalProject_Get_Property( tomlbed_build SOURCE_DIR )

#
# Make our local 'build' directory get wiped on a Cmake 'clean'
#
set_directory_properties( PROPERTIES ADDITIONAL_CLEAN_FILES "${SOURCE_DIR}/build")

There's a lot in there that is finicky.

  • DOWNLOAD_COMMAND and CONFIGURE_COMMAND are disabled with the empty strings.

  • You must get CC, CFLAGS, AR, and any other binutils-type command and flag dredged out of CMake and sent down to your build command (and install command, if it uses them) or you will not be cross compiling. One symptom: your final link says it can't find your symbols, even though you can see them in the .a file and see the .a file passed to the link.

  • BUILD_BYPRODUCTS is telling CMake that you will produce these files. It lets you hook into the idf_component_register() and its target.

  • COMPONENT_LIB is the CMake target for the idf_component_register().

  • I am building in the source tree as far as ESP-IDF knows. The library has its own build tree support and uses that, then hauls its build products out into the ESP-IDF locations with its INSTALL_COMMAND.

  • I didn't find a place for a "clean" command. The ADDITIONAL_CLEAN_FILES lets me at least get my build directory wiped. Not a great solution. Be aware, there is also an obsolete variant of that name with "MAKE" in it, which silently fails since ninja ignores it. Don't copy and paste that one from other sources.

  • If you do try to get the idf_component_register git download commands to work, be careful. It kept getting into a detached HEAD state for me, and I couldn't convince myself it wouldn't wipe out my work. For a while I was using DOWNLOAD_COMMAND, but it kept trying to overwrite my work. A submodule seems safer since CMake will keep its fingers off.

Hey Jim,

Did that approach really work for you??

I am trying to do the same with the C/C++ GNU Scientific Library (GSL) as an external library or component in an ESP-IDF project.

Link GNU Scientific Library (GSL): https://www.gnu.org/software/gsl/

Mostly this is just notes to myself, but I'll document the process. It took me 30 minutes. It will take you less since you will read my third bullet point.

  • Go see ESP32's Standard Toolchain Setup for Linux and macOS

  • You are going to install pip and brew to get packages. I would prefer not to have brew on my system, but it is the cost of using the ESP-IDF.

  • The first instruction is to sudo easy_install pip. This does not work; there is no easy_install command. No answer found in a quick googling. I ultimately skipped it and everything seems to be working fine.

  • Install Homebrew. Be aware that if their site is ever hacked you will hand control of your Mac to the attackers by following the installation instructions. Also, this wants to install the Xcode command line tools, which can take a while, even if you already have Xcode. Pay attention at the end: there are two commands you need to execute in your terminal.

  • Get the tools used to build ESP-IDF from brew. brew install cmake ninja dfu-util

  • Python3 checks… my clean Monterey system comes with python3 installed and no python2. Sounds ok.

  • Get ESP-IDF. You will make a directory first. The documents suggest ~/esp, but I put mine in ~/coding/esp. We'll see if that strikes me dead later. This is about a ½ GB download.

  • Hop on down into esp-idf and ./install.sh esp32 to install the cross compilers and linkers.

  • Set your environment variables. You need to do this each time you want to do ESP32 work, or put it in your .zprofile or whatever your login script is… . ~/WHERE_YOU_PUT_ESP-IDF/export.sh. Don't miss the first . in that command. You need to execute it in your top-level shell process, not a subshell, so the environment variables stick.

  • See if things are working… cd examples/get-started/hello_world then idf.py build. It should end making a ".bin" file and suggesting a flash command.

  • Congratulations! You are ready to develop. The above steps took me 30 minutes, but a good chunk of that was trying to find easy_install. It turns out you don't need it.

The ESP32 HTTPS over the air update mechanism requires you to know the SSL certificate used by the web server. This is problematic in a letsencrypt, fast expiring certificate world, but also for devices which will be deployed for long time frames.

It is possible to disable the SSL using the CONFIG_OTA_ALLOW_HTTP option. The SDK will tell you this should only be used for development, but if you also use signed firmware it is safe for deployed use.

Rather than protect the pipe the firmware traverses and blindly accepting anything coming down that pipe, you will instead not trust the pipe and validate the firmware as it arrives.

See the Secure OTA Updates Without Secure boot section. In a nutshell, you will make a private signing key and the OTA updates will be checked against that key.

Note Thee Well!

If your firmware itself is sensitive, then don't do this. It can be snooped in transit. On the other hand, in the regular HTTPS scheme there is a URL which provides a copy of your firmware, so you are probably already working on something for that.
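To make the shape of validate-the-payload-not-the-pipe concrete, here is a rough Python sketch. It is illustrative only: I use an HMAC for brevity where ESP-IDF actually appends an ECDSA signature (so the device holds only a public key), and every name and the symmetric key here are stand-ins, not SDK API:

```python
import hmac
import hashlib

# Hypothetical build-server secret. In the real ESP-IDF scheme this is an
# asymmetric private key and the device only ever sees the public half.
SIGNING_KEY = b"build-server secret"

def sign_firmware(image: bytes) -> bytes:
    # Build time: append a MAC over the whole image.
    return image + hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def verify_firmware(blob: bytes) -> bytes:
    # On the device: split off the 32-byte MAC, recompute, compare in
    # constant time, and refuse to flash anything that doesn't check out.
    image, mac = blob[:-32], blob[-32:]
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("bad signature, refusing to flash")
    return image

blob = sign_firmware(b"\x7fELF...firmware bytes...")
assert verify_firmware(blob) == b"\x7fELF...firmware bytes..."
# A single flipped bit anywhere in transit gets the blob rejected.
```

The point is that the plain-HTTP pipe can mangle or substitute anything it likes; the device only accepts an image the build key vouches for.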

Apparently you can use method names as first class values in Swift. It must not be a popular feature; I can't find it in the official documentation, but it is there and is just the trick when you want to pass a method into a function to, say, operate on a complex graph of instances sharing a base class.

The short answer: "SomeClass.someMethod" gives you a function which, when applied to an instance, gives you another function which, when called with the method's arguments (without their labels), invokes the method on the instance.

Playground Example

All output is what you'd expect.

//
// A base class and a derived class.
//
class Base {
    func blurt( capitalized:Bool) -> String {
        return "I am \( capitalized ? "Base" : "base")"
    }
}

class Derived : Base {
    override func blurt( capitalized:Bool) -> String {
        return "I am \( capitalized ? "Derived" : "derived")"
    }
}

//
// … an instance of each …
//
let base = Base()
let derived = Derived( )

//
// … they do what you expect …
//
print( base.blurt( capitalized: true ))
print( derived.blurt( capitalized: true ))

//
// HERE BE MAGIC: we grab a value for the method itself,
// it gives us a function which produces a function to
// invoke the method on an instance.
//
// (You can infer the type, I just put it in so
//  you can see it.)
//
let m : (Base) -> (Bool) -> String = Base.blurt

//
// Get a function for each of our instances which invokes
// our method.
//
let fDerived : (Bool) -> String = m(derived)
let fBase : (Bool) -> String = m(base)

//
// And invoke them. Notice, we lost our argument names.
//
print( fBase( false) )
print( fDerived( false) )

//
// Once you understand the extra layer of function here,
// you can invoke them like this.
//
print( m(base)(true) )

//
// Limitation: I was unable to tease out a syntax to
// work with polymorphic methods.
//
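For what it's worth, Python has the same curried shape (this comparison is mine, not part of the original playground), with one behavioral wrinkle: the plain function you pull off the class does not dispatch dynamically the way the Swift value does.

```python
class Base:
    def blurt(self, capitalized):
        return "I am " + ("Base" if capitalized else "base")

class Derived(Base):
    def blurt(self, capitalized):
        return "I am " + ("Derived" if capitalized else "derived")

base, derived = Base(), Derived()

m = Base.blurt                 # just a function: (instance, bool) -> str
print(m(base, True))           # -> I am Base
print(m(derived, True))        # -> I am Base: this pins Base's
                               #    implementation, unlike the Swift value

f = derived.blurt              # bound method: dispatch already resolved
print(f(False))                # -> I am derived

g = type(derived).blurt        # look up on the runtime type to get dispatch
print(g(derived, False))       # -> I am derived
```

So the Swift value behaves like Python's `type(obj).method` lookup, not like `Base.method` taken statically.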

Open Questions

  • Where is this in the documents?

  • Is there a way to do this with polymorphic methods? By which I mean something like let m = SomeClass.someMethod( onArray:[Array]) or some such if I have someMethod for both arrays and strings or something.
