SwiftUI and Environment

Sometime, somewhere, I stumbled across a hint about a part of SwiftUI that had been stumping me. Consider this example:
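Something along these lines, a minimal sketch with a single Text view and the font modifier applied directly to it:

Text("Hello, world!")
    .font(.largeTitle)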

This is great, and very easy to figure out. But how the heck does this work:
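A sketch of the pattern I mean, where the modifier is applied once to a Group and every view inside it picks up the font:

Group {
    Text("Hello")
    Text("World")
}
.font(.largeTitle)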

My naive approach involved creating a modifier that applied a method to each view type I wanted to modify. That seemed like the right thing to do, but it caused problems when I tried to apply it to *all* view types. It is possible to extend the View protocol to give a default implementation of a custom modifier method, but in the end this made all my views run that default implementation instead of any custom implementation a conforming type might provide. And beyond that, what kind of introspection would each view need in order for the Group to apply the font modifier to the views inside it?
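Roughly the kind of dead end I mean, sketched with a hypothetical standardFont() method (not anything that shipped):

import SwiftUI

extension View {
    // Hypothetical catch-all modifier with a "default" body. Because this is an
    // extension method rather than a protocol requirement, code that only knows
    // it has "some View" always calls this version; a view's own standardFont()
    // is never reached, and there is no way to see inside a Group from here.
    func standardFont() -> some View {
        font(.body)
    }
}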

It turns out that SwiftUI uses an environment value (a property on EnvironmentValues) to store the default font for every view in a given hierarchy.

https://developer.apple.com/documentation/swiftui/environmentvalues/font
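The nice part is that any view can read that value back out of the environment; a quick sketch:

import SwiftUI

struct FontInspector: View {
    // Whatever font was set somewhere above this view in the hierarchy, if any.
    @Environment(\.font) private var font

    var body: some View {
        Text(font == nil ? "No inherited font" : "Font inherited from an ancestor")
    }
}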

Adopting this pattern made it easier to send information down through the view hierarchy in my framework MusicStaffView. For example:
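A sketch of the idea with a hypothetical spaceWidth key; the property name and default value are placeholders rather than the exact API in MusicStaffView:

import SwiftUI

private struct SpaceWidthKey: EnvironmentKey {
    // Vertical distance between adjacent staff lines; the default is arbitrary.
    static let defaultValue: CGFloat = 10
}

extension EnvironmentValues {
    var spaceWidth: CGFloat {
        get { self[SpaceWidthKey.self] }
        set { self[SpaceWidthKey.self] = newValue }
    }
}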

Elements in the ZStack (including the StaffShapeView, which draws the musical staff lines) depend heavily on the vertical space between the staff lines. That spacing is how I achieved resolution independence when the framework moved from rendering images to drawing paths, and it remains a core feature of MusicStaffView today. But it requires that the width of the spaces be computed before any drawing takes place. In UIKit and Cocoa, this was easy, because the rendering happened in the draw() method of a custom UIView or NSView subclass.

In reality, SwiftUI is calling similar methods to draw the shapes of the notes and other elements, but now it relies on the environment to describe how big to make each element and what its offset should be (a given note sits a certain distance from the center of the staff depending on which note is being represented). That sizing can be derived from the GeometryProxy in the MusicStaffView itself and propagated to the rest of the views in the staff view, allowing them to draw at the correct size.
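Roughly the shape of the thing, building on the spaceWidth key sketched above (StaffShapeView stands in for the real drawing views):

import SwiftUI

struct StaffContainer: View {
    private let lineCount = 5

    var body: some View {
        GeometryReader { proxy in
            ZStack {
                StaffShapeView() // hypothetical: draws the staff lines
                // ...notes, clefs, accidentals, all reading spaceWidth...
            }
            // Compute the space width once from the proxy and hand it to every
            // element in the ZStack through the environment.
            .environment(\.spaceWidth, proxy.size.height / CGFloat(lineCount + 1))
        }
    }
}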

M2

I bought a new computer. Well, an old computer. A newer computer from the refurbished store. I appreciate that there are discounts on Apple products, because they tend to be fairly expensive, and the add-ons are overpriced because Apple has a captive audience thanks to the soldered-on nature of the components.

Using the iOS devices without being able to upgrade them hasn’t really been an issue for me. It has trained me to appreciate the contained, singular nature of the devices, and I find that I don’t really mind that I can no longer upgrade the RAM or put in a new HDD to prolong the life of the machine. At some point it just becomes necessary to move to a new model.

I didn’t expect to do this so soon, but I had a couple of friends who bought new M2 Macs, and I found myself with the itch to do something about my own setup. So far, it seems to be worth it. The performance gains with each M-series chip are particularly pronounced, and upgrading from the base M1 (which felt magical) to the M2 Pro seems to be another level altogether. I didn’t think it would be like this, but that’s because the Intel era we just came out of was so incremental.

These computers have gotten a bit more expensive. Gone are the days of a $1200, entry-level MacBook Pro. But the cost feels increasingly worth it. I bought refurbished so that I could spend on extra RAM (or unified memory, as we’re calling it now) and a bigger SSD, because I found myself always at the limit of 512GB. 1TB feels like a luxury, but I’m sure it will be a constraint in a couple of years.

Still, I find it hard to spend money on luxuries. But this seems to be worth it so far, and I think we are in a place where it won’t feel like the money went to waste.

Writing

I haven’t done a lot of writing here, mostly because I forgot about it, but here I am trying to decide which parts of my website are worth saving and which parts are going to be purged. Personally, I feel like I could benefit from quite a lot more writing, even if I never read the things again. But this place was mostly going to be for things that I learned while coding. I haven’t done a lot of very useful coding recently. I thought I would do more, especially during the pandemic, but that seems to have not been the case. It was really weird how, suddenly with so much free time, I just didn’t want to do any of the things I always felt I would do if I only had the time. This seems to have been a common feeling among people.

Currently, I’m working on a number of projects that I don’t know will end up finished. First is a music staff view library that complements the Music library I wrote a number of years ago. The staff view could be helpful in many projects, but first I need to give it basic functionality (which also gives me a good idea of how to proceed toward a version 1.0).

The second of these comes out of trying to create a DSL, or Domain Specific Language, in Swift, which turned out to be a really cool idea. The eventual goal of the staff view library is going to be to represent code in Swift/SwiftUI in the following manner:
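Something in this spirit; the type and parameter names here are placeholders rather than the shipping API:

MusicStaffView {
    Clef.treble
    Note(.c, octave: 4, length: .quarter)
    Note(.e, octave: 4, length: .quarter)
    Note(.g, octave: 4, length: .half)
}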

That looks very clean and concise to me. Now to figure out how the heck to make it draw. I wrote the original version so that anything could adopt a specific protocol called MusicStaffViewElement, which describes the necessary methods required for the element to be rendered by the view. This tends to make things easy for simple setups like the one above. But how do you draw more complicated groups like notes with accidentals or notes grouped by a beam? That kind of complicated rendering makes me want to create more complex data structures like beam groups and the like, but it feels like that’s too complicated for the SwiftUI rendering system (it probably isn’t, but it was too complicated for my UIView rendering system, so it might yet be).
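For reference, the protocol boils down to something like this sketch; the real requirements differ in the details:

import CoreGraphics

protocol MusicStaffViewElement {
    // The path that draws the element in the given frame.
    func path(in frame: CGRect) -> CGPath

    // Vertical offset from the center of the staff, measured in staff spaces.
    var offset: Int { get }
}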

I thought about drawing note heads separately, and then computing how the rest of the parts of the notes—such as stems and flags and dots—should draw. But that’s also complicated. So then, maybe it becomes a question of creating new elements, like NoteWithAccidental or NoteBeamGroup. But does that defeat the purpose of making the MusicStaffViewElement protocol so easy to draw? It’s a bit of a feedback loop still.

Politics

I see a lot of people asking for the abolition of the Electoral College and blaming that particular system for the current political situation. It’s true that Bush and Trump won elections even though they lost the popular vote (note, though, that Bush carried the popular vote by a considerable margin in 2004 and only lost by half a percent in 2000 — 2% for Trump was a much bigger fluke).

There are at least two problems in getting rid of the system as it is. The first is tradition and momentum. It’s hard to get people on board with something that has failed so spectacularly only once (see above). The second is that the status quo gives an advantage to the party that has spent the last 30-40 years beefing up their majorities in state government, so at least half of the government will be against it.

If you’ve bargained before, you’ll understand that a change this fundamental will only come via a compromise that fundamentally changes something you like from your own side. What’s worth giving up to change a system that usually produces a consistent result?

Would you give up separation of Church and State? Would you accept stricter voter ID laws or citizenship requirements? Would you give up the ACA and Social Security? Would you accept that the government’s budget must be balanced every fiscal year?

Swift is a Protocol-oriented Language

This video from WWDC 2015 has really been resonating with me recently.

I’ve been transitioning a couple of large projects from class-based Objective-C code to a more struct-based Swift approach. The revelation comes in the form of embracing protocol-oriented design and what that means for the things that I do.

These projects are all music based. They all need to represent music in some way. So I have been writing a Music representation library to unify and codify the things that I need to do. But here’s an example of the embrace of the protocol:

MusicTransposable Protocol

Music pitches can be transposed, or moved, in order to represent other pitches. For example, C0 transposed by a Major Third becomes E0. The MusicTransposable protocol should be adopted by any type that can be transposed by an interval. Default implementations are available for MusicPitch and types that adopt MusicPitchCollection (e.g. MusicChords can be transposed by transposing their individual pitches).

Here’s how it works:
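(A sketch of the protocol; the shipping signature may differ slightly.)

protocol MusicTransposable {
    // Returns a copy of the receiver moved by the given interval.
    func transposed(by interval: MusicInterval) -> Self
}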

From here, we can extend MusicPitch to adopt the protocol, using MusicInterval’s destinationPitch method to compute a destination pitch from a root pitch:
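(Again a sketch; I’m assuming a destinationPitch(from:) spelling for the interval math.)

extension MusicPitch: MusicTransposable {
    func transposed(by interval: MusicInterval) -> MusicPitch {
        // MusicInterval already knows how to move from a root pitch
        // to a destination pitch, so the conformance is a one-liner.
        return interval.destinationPitch(from: self)
    }
}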

But what about MusicPitchCollection, a protocol for managing a collection of pitches, such as a Scale or Chord:
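(Sketched here with a single pitches requirement.)

protocol MusicPitchCollection: MusicTransposable {
    // The pitches that make up the scale, chord, etc.
    var pitches: [MusicPitch] { get set }
}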

Notice that I am requiring MusicPitchCollection adoptees to also adopt MusicTransposable. But since they are simply collections of pitches, which are already transposable, maybe they should just transpose their individual pitches:
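(A sketch of the default implementation.)

extension MusicPitchCollection {
    func transposed(by interval: MusicInterval) -> Self {
        // Transpose each pitch and return a copy of the collection,
        // relying on the value semantics of the conforming struct.
        var copy = self
        copy.pitches = pitches.map { $0.transposed(by: interval) }
        return copy
    }
}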

It should be clear by now that any collection of pitches that adopts MusicPitchCollection is immediately able to transpose its pitches without any extra code beyond declaring an array to store them. This kind of out-of-the-box functionality is amazing.

The next big step involves rewriting the drawing code that displays notes and rhythms. Stay tuned, because protocol adoption makes this problem much more general and allows for a much cleaner codebase than was possible before.

“Hidden” Functions

One of the things that bugs me about Swift is how it can be difficult to find documentation on some of the global functions. Take, for example, numericCast(_:):

Convert x to type U, trapping on overflow in -Onone and -O builds.

Typically used to do conversion to any contextually-deduced integer type

Neat. Wait, what?

Huh. Okay, so maybe this will work to convert the UInt16 that I chose for my latest CoreData project:
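Something like this, with theValue standing in for the stored UInt16:

let theValue: UInt16 = 540              // e.g. an attribute stored as UInt16
let length: Int = numericCast(theValue) // destination type deduced from the annotation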

That’s it. It’s context aware, so the compiler figures out the destination type from context (as long as theValue’s own type is explicit), and there are four versions (one for each combination of signed and unsigned integer).

Goals

I used to journal on a regular basis, but the habit has been somewhat lost over the past few years. One of the things that I did regularly was to write a few goals and fears to see how they changed over the years.

Okay, regularly isn’t fair because I think I did it all of twice. But I like to have a place that I can write down some ideas about things that I want to remember.

To that end, my goals for writing code as of today:

  1. Think before you do something
  2. Comment everything that is being worked on, regardless of the status
  3. Break up sections of code into smaller functions

The goal of all three of these is to be more efficient, but #2 looks like more work than needs to be done. But I found some old code last night that I had documented completely. It was kind of neat because there were descriptions of how things worked and what didn’t work. And this is something that was written years ago.

Anyway, just some ideas. Let’s see how well they work.

Value Types vs Reference Types

So, watching the 2015 WWDC Videos turned me on to value types as opposed to reference types. Three weeks ago, I understood that such things existed, but didn’t have any clue how they worked. This comes from a deep distrust of pointers when I was first coding in the mid-90s (all of my friends opted for C++, but I didn’t want to have to deal with pointers, so I picked C — I’m going to guess that not using Windows also pushed me in that direction).


Manifesto

With the advent of Swift in the Apple Developer ecosystem, I’ve been learning quite a bit about the way that things work under the hood. Let’s get one thing straight, though: I am not a trained computer programmer. I’ve tried to take algorithm classes, but everything goes over my head because I don’t really want to spend the time figuring out what everything actually does. Everything I have learned is from trial and error (mostly through the errors).

So, bear with me, because things on this blog are about to get really simplistic. The point is just to have another entry into (hopefully) correct answers when you google something that you don’t understand. Who knows if it will work.

Radar

Apple’s bug reporting has long been referred to as Radar, which is the name of the internal bug tracking tool. Recently, I’ve been sending bug reports for the new version of Swift. Version 1.2 had an issue that crashed at runtime with a bad access:

let search = searchTerm.lowercaseString //EXC_BAD_ACCESS
let search = (searchTerm as NSString).lowercaseString //Works as expected

But what’s this? Swift String is supposed to be bridged to NSString. I should be able to use lowercaseString for either type. And what’s this bad access? A built-in function is accessing an invalid portion of memory? Since this doesn’t raise a compiler warning, most likely it’s not intended behavior.

So, I filed a bug – number 19848611 – Swift String.lowercaseString causes BAD_ACCESS, which eventually was found to be a duplicate of 19801253.

And today, reading through the iOS 8.3 Beta 2 change log:

Fixed a use after free crash in lowercaseString and uppercaseString. (19801253)

Some object in there was being released and then accessed, causing the bad access. But it’s fixed! File your Radars!