Last week, Apple announced their new iPhone 7 and iPhone 7 Plus devices, and the internet collectively flipped its shit. The reason? The removal of the standard headphone socket.
Like any product engineering decision, this one comes with trade-offs. There are advantages and disadvantages; if, for you personally, the disadvantages outweigh the advantages, it’s completely reasonable to be disappointed.
On the other hand, the degree of shit-flipping over the past few days has been spectacular — and most of it has been based on what could charitably be called misunderstandings.
Hokusai Audio Editor is one of Wooji Juice’s oldest apps — it’s been on the App Store since 2011, and work started on it even earlier, in June of 2010. It’s received a whole bunch of updates since then, but a lot has changed in the world of iOS in the past six years, and it was time to bring Hokusai up to date in a more comprehensive way.
It’s funny, back when I first announced Ferrite Recording Studio, I was expecting a whole bunch of folks to ask how it was different from Hokusai Audio Editor. I mean, it’s understandable that people would ask: both Ferrite and Hokusai are iOS apps that let you record and edit audio. But no-one did — even months after release, still nothing.
But I recently announced that a major update to Hokusai is around the corner, and suddenly lots of people are asking.
To look at how Ferrite Recording Studio’s design evolved, we need to go back, waaaay back, to late 2012/early 2013, when I was doing some user-interface experimentation.
I’ve been wanting to make a more DAW-like audio editing package for a long time, but I wanted to get the user interface right. Desktop DAWs have always been designed with the precision of the mouse (and lots of keyboard shortcuts) in mind. This is awkward on devices where you’re using fingers to edit on a touchscreen.
Here’s a very common thing in iOS apps: You have a table view, with a list of items, each one a simple line of text, and a checkmark next to the one that’s selected.
Sure, there are often better ways of doing this — particularly for UI that’s at the core of your app — but still. It’s familiar to users, and for things like settings, it’s very useful. Indeed, the iOS system settings are full of ‘em. To take one example to use in this post, in the Messages section, when you tap “Keep Messages”, you can pick “30 Days”, “1 Year” or “Forever”.
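In UIKit terms, this pattern is usually just a table view whose cells get a checkmark accessory on the selected row. Here’s a minimal sketch of the “Keep Messages” example above — the class name, cell identifier, and default selection are my own illustrative choices, not anything from Apple’s actual Settings code:

```swift
import UIKit

// A minimal sketch of the checkmark-list pattern:
// one selected row, marked with a checkmark accessory.
class KeepMessagesViewController: UITableViewController {
    let options = ["30 Days", "1 Year", "Forever"]
    var selectedIndex = 2  // assume "Forever" is the current setting

    override func tableView(_ tableView: UITableView,
                            numberOfRowsInSection section: Int) -> Int {
        return options.count
    }

    override func tableView(_ tableView: UITableView,
                            cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "OptionCell",
                                                 for: indexPath)
        cell.textLabel?.text = options[indexPath.row]
        // The checkmark accessory marks the currently-selected row.
        cell.accessoryType = (indexPath.row == selectedIndex) ? .checkmark : .none
        return cell
    }

    override func tableView(_ tableView: UITableView,
                            didSelectRowAt indexPath: IndexPath) {
        selectedIndex = indexPath.row
        tableView.reloadData()  // move the checkmark to the new row
        tableView.deselectRow(at: indexPath, animated: true)
    }
}
```

That’s the whole trick: the selection state lives in one index, and `cellForRowAt` derives the checkmark from it, so there’s never more than one row checked.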
There’s a lot of chatter online about different application architecture-patterns (MVC, MVVM, VIPER, etc). I want to talk today about some architectural decisions that are kinda orthogonal to those, that also have pretty big effects on your application.
Towards the end of last year, I released Ferrite Recording Studio, which is a fairly large, sophisticated app written almost entirely in Swift, during a year in which Swift itself was rapidly evolving.
I hear a lot about people taking a wait-and-see approach to Swift, or dipping a toe in by migrating a few pieces here and there, or even in some cases outright rejecting it… but I haven’t heard many people talking about diving in head-first on a big, “Pro App”-sized project. So I thought I’d write up something about it.
Ferrite is aimed at journalists, podcasters, lecturers and public speakers, voice-over artists, audiobook producers, and anyone else who needs to record and edit speech on the move.
(Musicians: I haven’t forgotten you! It’s just that I received so much email from non-musicians who really needed something more powerful than the built-in voice recorder — but without the typical trappings of music packages that would get in their way, like insisting on setting a tempo, requiring a project before they can record, or limiting recordings to typical pop-song length — that I decided to make something for them!)
I’m a big fan of Swift — Ferrite Recording Studio, the big new app I’m working on, is written in it (more on that another day).
I’ve found it hard to articulate quite why I like Swift so much, though, because it comes down to a “sum of the parts” thing, where Swift has scavenged a bunch of great ideas from other languages and assembled them into a nice, relatively cohesive whole, rather than there being any single feature I can point to and say “There! That thing there is why!”
So, I’ve had a couple of emails recently from people about Mitosynth: how the various modes (Sampler, Blender, Painter, Additive and Gridcøre) work together and how Prefilter fits into the picture. I thought I’d take some time out from working on my new project to write up an explanation and share it with everyone.
First up, a little background…