Hacking with Swift – Learning Project 13

So this is Day 56 for me, but I am tackling Day 52 of the 100 Days of Swift learning path. It seems I cannot find the time to catch up, or the topics are hard enough for me that I need extra time to finish the challenges before moving on.

The app we are going to build in this project is called Instafilter.

We are going to learn about UISlider and Core Image real-time effects.

Setting up

In project 10 we learned how to select and import a picture from a user’s photo library; in this project we are going to do the opposite: write images back to the photo library.

This project lets users choose a picture from the library and edit it with a series of Core Image filters before saving it back to the library.

Designing the interface

I will divide this into steps so that it is easier to follow:

  1. In Main.storyboard embed the view controller inside a navigation controller
  2. Drag a UIView inside the view controller with size 375 x 470, positioned with a slight inset (about 20 pts) from the top left corner (here the article is misleading so just follow the video tutorial). In the Attributes Inspector, change the background color to “Dark Grey Color”.
  3. Drag an Image View inside the view with size 355 x 450, x: 10, y: 10. The image view’s mode should already be “Aspect Fit” if you use Xcode 10.2 or later.
  4. Drag a label just below the View. Also here the article gives different information compared to the video. In the video the views are just dragged around while in the article their size and position is hardcoded. Change the label’s text to “Intensity” and—if you want to follow the text—make it right aligned (I did so).
  5. Drop a slider next to the label, dragging it all the way to the other side of the screen.
  6. Place two buttons: the first 120 x 44, attached to the left edge of the screen just below the label, with a title of “Change Filter”, the second 60 x 44 on the other edge, with a title of “Save”.
  7. Select the View Controller > Editor > Resolve Auto Layout Issues > Reset To Suggested Constraints. If you followed the video, you should change one of the buttons’ constraints to 20 and Update Frames; otherwise everything should already look fine.
  8. Switch to the Assistant Editor and create outlets for the image view and the slider, actions for the two buttons and for the slider’s intensity.

Importing a picture

Let’s continue from where we left off:

  1. In ViewController.swift add a property to store the current image of type UIImage! (implicitly unwrapped)
  2. In viewDidLoad assign “Instafilter” to the view controller’s title property and add a right bar button item with system item .add, target self and action #selector(importPicture).
  3. Write the importPicture method: declare an image picker controller, set it to be editable via the .allowsEditing property, set the view controller to be its delegate and present it with a standard animation. Be sure to conform to UIImagePickerControllerDelegate and UINavigationControllerDelegate before moving on.
  4. In Info.plist add the “Privacy – Photo Library Additions Usage Description” key, giving it a value of “We need to work with your photos” (or “We need to import photos” if you are following the text instead of the videos).
  5. Implement the method called when the user has finished selecting a picture with the image picker, that is didFinishPickingMediaWithInfo. Make sure (via guard let) that there is an image, using info[.editedImage] as? UIImage, then dismiss the controller and set currentImage to the found image.
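The importing steps above can be sketched as follows; this mirrors the tutorial’s code, with currentImage being the property from step 1:

```swift
import UIKit

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    var currentImage: UIImage!

    override func viewDidLoad() {
        super.viewDidLoad()
        title = "Instafilter"
        navigationItem.rightBarButtonItem = UIBarButtonItem(barButtonSystemItem: .add, target: self, action: #selector(importPicture))
    }

    // Presents the photo picker; called by the + bar button item.
    @objc func importPicture() {
        let picker = UIImagePickerController()
        picker.allowsEditing = true
        picker.delegate = self
        present(picker, animated: true)
    }

    // Called when the user has finished choosing a picture.
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        guard let image = info[.editedImage] as? UIImage else { return }
        dismiss(animated: true)
        currentImage = image
    }
}
```

Note that both delegate protocols must be listed, because UIImagePickerController’s delegate expects a type conforming to both.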

Quote of Day 53

A picture is worth a thousand words; an interface is worth a thousand pictures!

Ben Shneiderman, CS Professor at the University of Maryland

Applying filters: CIContext and CIFilter

  1. Import CoreImage (it is such a vast topic that I will study it before doing the next challenges and write a separate article about that).
  2. Add two new properties, one for a Core Image context and one for a Core Image filter, both implicitly unwrapped. A CIContext is a subclass of NSObject that provides an evaluation context for rendering image-processing results and performing image analysis, while a CIFilter (also a subclass of NSObject) is an image processor that produces an image by manipulating one or more input images or by generating new image data. Before moving on, let’s instantiate both of them in viewDidLoad(), with a filter named “CISepiaTone”.
  3. Inside didFinishPickingMediaWithInfo, set the currentImage property to be the input for the currentFilter. To do that we need to convert it to a CIImage object. I found what the Documentation has to say about this very interesting:

[A CIImage object is] a representation of an image to be processed or produced by Core Image filters. […] Although a CIImage object has image data associated with it, it is not an image. You can think of a CIImage object as an image “recipe”. A CIImage object has all the information necessary to produce an image, but Core Image doesn’t actually render an image until it is told to do so. This lazy evaluation allows Core Image to operate as efficiently as possible.

After that we call currentFilter.setValue(beginImage, forKey: kCIInputImageKey). But what is this last key? It is defined as “A key for the CIImage object to use as an input image”. This doesn’t really resolve my doubts, even if Paul says it is self-explanatory, but let’s move on. Call the not-yet-created applyProcessing() method just below that (and also inside the intensityChanged method). This will make sure that the new method is called as soon as the image is imported and then whenever the slider is moved.
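In code, these steps come together roughly like this (a sketch; the title and bar button setup from the previous section are elided):

```swift
import CoreImage
import UIKit

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    @IBOutlet var imageView: UIImageView!
    var currentImage: UIImage!

    var context: CIContext!        // renders Core Image output
    var currentFilter: CIFilter!   // the currently selected filter

    override func viewDidLoad() {
        super.viewDidLoad()
        // … title and bar button setup from before …
        context = CIContext()
        currentFilter = CIFilter(name: "CISepiaTone")
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        guard let image = info[.editedImage] as? UIImage else { return }
        dismiss(animated: true)
        currentImage = image

        // Convert to a CIImage (an image “recipe”) and feed it to the filter.
        let beginImage = CIImage(image: currentImage)
        currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
        applyProcessing()
    }

    func applyProcessing() { /* written below */ }
}
```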

  4. Write the first version of the applyProcessing method. I will use the video version once more because its syntax is slightly different from the one used in the text. First be sure that there is an image attached to our filter through a guard let statement (that is, safely read the output image from the current filter); use the value of the intensity slider to set the kCIInputIntensityKey value of the current Core Image filter; then verify that it is possible to create a CGImage from our context (that is, a Quartz 2D image from a region of a Core Image image object). This renders a region of an image (in this case all of it, which is the meaning of image.extent) into a temporary buffer using the context, then creates and returns a Quartz 2D image with the results. If this succeeds (an if let is needed because the createCGImage method returns an optional) store this image into a UIImage wrapper and set it as the imageView.image.
  5. Fill in the changeFilter method with an alert controller’s action sheet which displays all (or the most interesting) filters. Please be extra careful (not like me) when writing this, because Core Image and Xcode will not warn you if you make a mistake in a filter’s name. It will not tell you “filter id not recognised”. No… it will just crash your app at a line that, frankly, at least to me, doesn’t make much sense. Each alert action will have the filter’s name as its title, the default style and the yet unwritten setFilter method as its handler. An extra action will contain the cancel button to avoid changing the selected filter.
  6. Write the setFilter method, which should update the currentFilter property with the filter that was chosen, set the kCIInputImageKey and call applyProcessing. As this is an “action method” it needs to have a UIAlertAction as its only parameter. So, make sure there is a valid image before continuing (guard let), safely read the alert action’s title (another guard let), set currentFilter = CIFilter(name: actionTitle), then fill in the same three lines we had at the end of the didFinishPickingMediaWithInfo method. I wonder if we could refactor this…
  7. As not every filter has an intensity setting, the app will crash if we try to modify the intensity of a filter that doesn’t have one. For a full description of the filters’ keys go to this web page in the Apple Documentation. Knowing that each filter has a property that returns an array of all the keys it can support (inputKeys), store its return value in a constant and use it in conjunction with the contains() method to check whether a given key is supported and, if so, set it. This adds the following new code to the applyProcessing method.
let inputKeys = currentFilter.inputKeys

if inputKeys.contains(kCIInputIntensityKey) {
    currentFilter.setValue(intensity.value, forKey: kCIInputIntensityKey)
}

if inputKeys.contains(kCIInputRadiusKey) {
    currentFilter.setValue(intensity.value * 200, forKey: kCIInputRadiusKey)
}

if inputKeys.contains(kCIInputScaleKey) {
    currentFilter.setValue(intensity.value * 10, forKey: kCIInputScaleKey)
}

if inputKeys.contains(kCIInputCenterKey) {
    currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
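For reference, here is roughly how the steps above come together (a sketch following the video version; imageView and intensity are the outlets created earlier, and the filter list follows the tutorial):

```swift
func applyProcessing() {
    // Respect only the keys this filter actually supports.
    let inputKeys = currentFilter.inputKeys

    if inputKeys.contains(kCIInputIntensityKey) {
        currentFilter.setValue(intensity.value, forKey: kCIInputIntensityKey)
    }
    // … plus the radius, scale and center checks shown above …

    guard let outputImage = currentFilter.outputImage else { return }

    // Render the whole image (outputImage.extent) into a CGImage, then wrap it.
    if let cgImage = context.createCGImage(outputImage, from: outputImage.extent) {
        imageView.image = UIImage(cgImage: cgImage)
    }
}

@IBAction func changeFilter(_ sender: Any) {
    let ac = UIAlertController(title: "Choose filter", message: nil, preferredStyle: .actionSheet)
    // Filter names must be spelled exactly right — a typo crashes at run time.
    for name in ["CIBumpDistortion", "CIGaussianBlur", "CIPixellate", "CISepiaTone", "CITwirlDistortion", "CIUnsharpMask", "CIVignette"] {
        ac.addAction(UIAlertAction(title: name, style: .default, handler: setFilter))
    }
    ac.addAction(UIAlertAction(title: "Cancel", style: .cancel))
    present(ac, animated: true)
}

func setFilter(action: UIAlertAction) {
    guard currentImage != nil else { return }
    guard let actionTitle = action.title else { return }

    currentFilter = CIFilter(name: actionTitle)

    // Same three lines as at the end of didFinishPickingMediaWithInfo.
    let beginImage = CIImage(image: currentImage)
    currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
    applyProcessing()
}
```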

Saving to the iOS photo library

  1. Familiarise yourself with the new UIImageWriteToSavedPhotosAlbum() function and its parameters. The first, image, is the image to write to the Camera Roll album; the second, completionTarget, optionally contains the object whose selector should be called after the image has been written to the Camera Roll album (in this case self, the view controller); the third, completionSelector, contains the method selector of the completionTarget object to call. It is optional but, if provided, the method should conform to a very specific format:
- (void)image:(UIImage *)image
    didFinishSavingWithError:(NSError *)error
                 contextInfo:(void *)contextInfo;

This is much clearer than I thought Objective-C could be: it is a method which returns void (that is, nothing), called image, takes a UIImage as its first parameter, an optional Error as second and an UnsafeRawPointer as third. This last one allows us to access and manage raw bytes in memory, whether or not that memory has been bound to a specific type.

Finally, the fourth parameter contains an optional pointer to any context-specific data that one wants passed to the completionSelector.

  2. So, inside the save method, be sure that there is an image inside the image view and then call UIImageWriteToSavedPhotosAlbum(image, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil). After the preceding explanation this should not be too scary.
  3. Finally, complete the image(_:didFinishSavingWithError:contextInfo:) method with two different alert controllers, one if there is an error and another if there is not.
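A sketch of the saving code described in these last steps (the alert wording is my own):

```swift
@IBAction func save(_ sender: Any) {
    // Do nothing if no image has been imported yet.
    guard let image = imageView.image else { return }

    UIImageWriteToSavedPhotosAlbum(image, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
}

// Matches the Objective-C selector format shown above.
@objc func image(_ image: UIImage, didFinishSavingWithError error: Error?, contextInfo: UnsafeRawPointer) {
    if let error = error {
        let ac = UIAlertController(title: "Save error", message: error.localizedDescription, preferredStyle: .alert)
        ac.addAction(UIAlertAction(title: "OK", style: .default))
        present(ac, animated: true)
    } else {
        let ac = UIAlertController(title: "Saved!", message: "Your altered image has been saved to your photos.", preferredStyle: .alert)
        ac.addAction(UIAlertAction(title: "OK", style: .default))
        present(ac, animated: true)
    }
}
```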

Voilà! The project is finished!

Please don’t forget to drop a hello and a thank you to Paul for all his great work (you can find him on Twitter) and don’t forget to visit the 100 Days Of Swift initiative page.

You can find the repo for this project here.

Thank you!


If you like what I’m doing here please consider liking this article and sharing it with some of your peers. If you are feeling like being really awesome, please consider making a small donation to support my studies and my writing (please appreciate that I am not using advertisement on my articles).

If you are interested in my music engraving and my publications don’t forget to visit my Facebook page and the pages where I publish my scores (Gumroad, SheetMusicPlus, ScoreExchange and on Apple Books).

You can also support me by buying Paul Hudson’s books from this Affiliate Link.

Anyways, thank you so much for reading!

Till the next one!

Published by Michele Galvagno

Professional Musical Scores Designer and Engraver. Graduated Classical Musician (cello) and Teacher. Tech Enthusiast and Apprentice iOS / macOS Developer.
