To look at how Ferrite Recording Studio’s design evolved, we need to go back, waaaay back, to late 2012/early 2013, when I was doing some user-interface experimentation.
I’ve been wanting to make a more DAW-like audio editing package for a long time, but I wanted to get the user interface right. Desktop DAWs have always been designed with the precision of the mouse (and lots of keyboard shortcuts) in mind. This is awkward on devices where you’re using fingers to edit on a touchscreen.
Here’s a very common thing in iOS apps: You have a table view, with a list of items, each one a simple line of text, and a checkmark next to the one that’s selected.
Sure, there are often better ways of doing this — particularly for UI that’s at the core of your app — but still. It’s familiar to users, and for things like settings, it’s very useful. Indeed, the iOS system settings are full of ‘em. To take one example to use in this post, in the Messages section, when you tap “Keep Messages”, you can pick “30 Days”, “1 Year” or “Forever”.
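That pattern is only a few lines of table-view code. Here's a minimal sketch of it in Swift, using the “Keep Messages” options as the data; the class and cell identifier names are my own illustration, not anything from Apple's actual settings code:

```swift
import UIKit

// The familiar "list with a checkmark" pattern: one row is marked as chosen.
final class KeepMessagesViewController: UITableViewController {
    let options = ["30 Days", "1 Year", "Forever"]
    var selectedIndex = 2  // "Forever" is the default

    override func tableView(_ tableView: UITableView,
                            numberOfRowsInSection section: Int) -> Int {
        return options.count
    }

    override func tableView(_ tableView: UITableView,
                            cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Option",
                                                 for: indexPath)
        cell.textLabel?.text = options[indexPath.row]
        // Only the selected row gets the checkmark accessory.
        cell.accessoryType = (indexPath.row == selectedIndex) ? .checkmark : .none
        return cell
    }

    override func tableView(_ tableView: UITableView,
                            didSelectRowAt indexPath: IndexPath) {
        selectedIndex = indexPath.row
        tableView.reloadData()
        tableView.deselectRow(at: indexPath, animated: true)
    }
}
```

(Written against the modern UIKit API names for clarity; the 2015-era Objective-C-flavoured signatures differ cosmetically but the shape is identical.)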
There’s a lot of chatter online about different application architecture patterns (MVC, MVVM, VIPER, etc). I want to talk today about some architectural decisions that are kinda orthogonal to those, but that also have pretty big effects on your application.
Towards the end of last year, I released Ferrite Recording Studio, which is a fairly large, sophisticated app written almost entirely in Swift, during a year in which Swift itself was rapidly evolving.
I hear a lot about people taking a wait-and-see approach to Swift, or dipping a toe in by migrating a few pieces here and there, or even in some cases outright rejecting it… but I haven’t heard many people talking about diving in head-first on a big, “Pro App”-sized project. So I thought I’d write up something about it.
Ferrite is aimed at journalists, podcasters, lecturers and public speakers, voice-over artists, audiobook producers, and anyone else who needs to record and edit speech on the move.
(Musicians: I haven’t forgotten you! It’s just that I received so much email from non-musicians who really needed something more powerful than the built-in voice recorder — but which didn’t get in their way by insisting on setting a tempo, creating a project before they can record, limiting recordings to typical pop-song length, or other staples of music packages — that I decided to make something for them!)
I’m a big fan of Swift — Ferrite Recording Studio, the big new app I’m working on, is written in it (more on that another day).
I’ve found it hard to articulate quite why I like Swift so much, though, because it comes down to a “sum of the parts” thing, where Swift has scavenged a bunch of great ideas from other languages and assembled them into a nice, relatively cohesive whole, rather than there being any single feature I can point to and say “There! That thing there is why!”
So, I’ve had a couple of emails recently from people about Mitosynth: how the various modes (Sampler, Blender, Painter, Additive and Gridcøre) work together and how Prefilter fits into the picture. I thought I’d take some time out from working on my new project to write up an explanation and share it with everyone.
First up, a little background…
A week ago, Apple announced release dates and the price list for the Apple Watch, and of course everyone’s going nuts with articles about it. By now, this is no surprise: it’s standard practice every time there’s an announcement from Apple; not only the wave of press directly reporting on it, but also all the litter, the flotsam and jetsam that attempts to surf that wave, desperately trying to catch a sliver of refracted PR.
So, Swift 1.2 was recently released, with lots of changes, mostly for the better, including fixing several things mentioned in previous articles. Life on the bleeding edge… but, incremental compilation has arrived, so build times are drastically improved in most cases. Error messages are frequently more helpful. Default parameters don’t break trailing closure syntax. And of course there are many other fixes, and a bunch of exciting new things to play with.
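To illustrate that last fix, here's a small sketch (my own example, not from the release notes) of the combination that used to trip the compiler up: a function with a defaulted parameter ahead of a closure parameter, called with trailing closure syntax:

```swift
// A function whose final parameter is a closure, preceded by a default.
func retry(attempts: Int = 3, operation: () -> Void) {
    for _ in 0..<attempts {
        operation()
    }
}

// Trailing closure syntax, leaving the defaulted parameter out entirely —
// the combination that now works as you'd expect.
retry {
    print("trying…")
}
```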
Update: A few parts of this post are affected by Swift 1.2. Once it’s out of beta, I’ll do a rewrite to reflect the changes, but in the meantime, I’ve collected the updates together at the end.
Here’s another little toy I’ve been using in Swift. Essentially, it’s a wrapper around dispatch_after() that defers execution of a block of code for a given amount of time. Except that if, when it goes to execute the block, you’ve already scheduled another, it throws the first away:
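Since the original listing isn't reproduced here, this is a reconstruction of the idea rather than the author's actual code — a small class that coalesces scheduled blocks, written against the modern `DispatchQueue` API instead of the raw `dispatch_after()` call:

```swift
import Dispatch

/// Defers a block for a given delay; scheduling a new block before the old
/// one fires throws the old one away, so only the latest block ever runs.
final class CoalescedBlock {
    private var generation = 0
    private let queue: DispatchQueue

    init(queue: DispatchQueue = .main) {
        self.queue = queue
    }

    func schedule(after delay: Double, block: @escaping () -> Void) {
        // Bump the generation; any previously scheduled block becomes stale.
        generation += 1
        let scheduled = generation
        queue.asyncAfter(deadline: .now() + delay) { [weak self] in
            // Only run if nothing newer was scheduled in the meantime.
            guard let self = self, self.generation == scheduled else { return }
            block()
        }
    }
}
```

This is handy for things like live-updating a preview as the user types: each keystroke reschedules the update, and only the final one actually executes.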