Ferrite is aimed at journalists, podcasters, lecturers and public speakers, voice-over artists, audiobook producers, and anyone else who needs to record and edit speech on the move.
(Musicians: I haven’t forgotten you! It’s just that I received so much email from non-musicians who really needed something more powerful than the built-in voice recorder, something that didn’t get in their way by insisting on setting a tempo, requiring a project before they could record, limiting recordings to typical pop-song length, or other typical trappings of music packages, that I decided to make something for them!)
I’m a big fan of Swift — Ferrite Recording Studio, the big new app I’m working on, is written in it (more on that another day).
I’ve found it hard to articulate quite why I like Swift so much, though, because it comes down to a “sum of the parts” thing, where Swift has scavenged a bunch of great ideas from other languages and assembled them into a nice, relatively cohesive whole, rather than there being any single feature I can point to and say “There! That thing there is why!”
So, I’ve had a couple of emails recently from people about Mitosynth: how the various modes (Sampler, Blender, Painter, Additive and Gridcøre) work together and how Prefilter fits into the picture. I thought I’d take some time out from working on my new project to write up an explanation and share it with everyone.
First up, a little background…
A week ago, Apple announced release dates and the price list for the Apple Watch, and of course everyone’s going nuts with articles about it. By now, this is no surprise: it’s standard practice every time there’s an announcement from Apple; not only the wave of press directly reporting on it, but also all the litter, the flotsam and jetsam that attempts to surf that wave, desperately trying to catch a sliver of refracted PR.
So, Swift 1.2 was recently released, with lots of changes, mostly for the better, including fixes for several things mentioned in previous articles. Life on the bleeding edge… but incremental compilation has arrived, so build times are drastically improved in most cases. Error messages are frequently more helpful. Default parameters no longer break trailing closure syntax. And of course there are many other fixes, and a bunch of exciting new things to play with.
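To make that trailing-closure fix concrete, here’s a hypothetical example (the function and names are mine, not from the post): a defaulted parameter sitting ahead of a closure parameter, called with trailing closure syntax while the default is omitted.

```swift
// Hypothetical illustration: a defaulted parameter followed by a closure parameter.
func fetch(retries: Int = 3, completion: (String) -> ()) {
    completion("fetched (with up to \(retries) retries)")
}

// Under Swift 1.2, this trailing-closure call compiles even though the
// defaulted `retries` parameter is omitted:
fetch { result in
    println(result)
}
```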
Update: A few parts of this post are affected by Swift 1.2. Once it’s out of beta, I’ll do a rewrite to reflect the changes, but in the meantime, I’ve collected the updates together at the end.
Here’s another little toy I’ve been using in Swift. Essentially, it’s a wrapper around dispatch_after() that defers execution of a block of code for a given amount of time. Except that if, when it goes to execute the block, you’ve already scheduled another, it throws the first away:
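The original snippet isn’t reproduced here, but a minimal sketch along those lines might look like this (Swift 1.x-era GCD; the `debounce` name, the signature, and the single-queue assumption are mine, not necessarily the original’s):

```swift
import Foundation

// Wrap dispatch_after() so that only the most recently scheduled
// block actually runs; earlier pending blocks are thrown away.
func debounce(delay: NSTimeInterval, queue: dispatch_queue_t, action: () -> ()) -> () -> () {
    var lastScheduled = 0
    return {
        // Each call bumps the counter and remembers its own number.
        lastScheduled += 1
        let thisCall = lastScheduled
        let when = dispatch_time(DISPATCH_TIME_NOW, Int64(delay * Double(NSEC_PER_SEC)))
        dispatch_after(when, queue) {
            // If another call arrived while we were waiting, a newer block
            // has superseded this one, so do nothing.
            if thisCall == lastScheduled {
                action()
            }
        }
    }
}

// Example use (hypothetical): call search() on every keystroke; the work
// only runs once typing has paused for 0.3 seconds. Note the counter isn't
// synchronized, so all calls are assumed to come from the same queue.
let search = debounce(0.3, dispatch_get_main_queue()) {
    // ...perform the actual search here...
}
```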
It’s been quiet for a while here at Wooji Juice: a few small app updates (Mitosynth and Hokusai both have updates currently either in testing or App Store review), but most of the work is happening behind-the-scenes on R&D.
That includes getting to grips with Swift, Apple’s new programming language. It came out back in June at WWDC, and has seen some updates since. So how is it?
Mitosynth 1.2 is now available! Mitosynth remains iOS 7 compatible, but some new features do require iOS 8. Here’s the quick overview of what’s new:
- Automation Step Sequencer
- Pitch & Note Tracking
- MIDI Program Change & Patch Bank mappings
- MIDI Polyphonic Aftertouch
- iPhone performance mode enhancements
- iPhone 6/6+ screen size extensions
- 20 new built-in patches
- iOS 8: IAA Transport Controls
- iOS 8: Bluetooth MIDI Configuration
- iOS 8: Import & Export audio using File Providers
- iOS 8: Audio Plugin support
- iOS 8: Finger-angle sensitivity
So, I’ve been doing some experimentation, and it turns out, one of the new features Apple announced for iOS 8, “Extensions”, seems to work pretty well for implementing audio plugins.
What I mean by that is allowing one app to add audio commands directly to another app. Note that these are offline commands, rather than realtime ones — Audiobus, Core MIDI and Inter-App Audio are the way to send live audio or MIDI around. But for apps that can do destructive edits to audio, this is pretty nifty. Or even for apps that don’t edit audio at all, but have an audio library.
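The post doesn’t show code here, but to give a rough sense of the shape, here’s a hypothetical host-side sketch using the stock iOS 8 Action extension machinery. The function name and flow are my assumptions about the general mechanism, not the actual plugin spec being described:

```swift
import UIKit

// Hypothetical sketch: the host app offers a rendered audio file to any
// installed Action extensions, then collects the processed result.
func offerAudioToPlugins(audioFileURL: NSURL, fromViewController host: UIViewController) {
    let activityVC = UIActivityViewController(activityItems: [audioFileURL],
                                              applicationActivities: nil)
    activityVC.completionWithItemsHandler = { activityType, completed, returnedItems, error in
        if completed {
            // An audio-processing Action extension can hand back the edited
            // file via returnedItems (NSExtensionItems wrapping NSItemProviders).
            // ...load the processed audio from returnedItems here...
        }
    }
    host.presentViewController(activityVC, animated: true, completion: nil)
}
```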
Since Mitosynth has received a lot of comments regarding its design, I thought it might be nice to look at how that evolved over the course of development.
iOS apps are an interesting design challenge. On the one hand, you have excellent high-res screens, a relatively powerful GPU, and graphics & animation APIs that are, for the most part, excellent to work with (and probably better than any other platform I’ve developed for). Touch-screens invite direct manipulation, and there’s lots of scope for doing interesting things with multitouch.