
Grindr Blog

The official Grindr blog.
News and more from Team Grindr.

(Part 2 of 3) Tech Talk - iOS Development with Test Driven Development, Unit Testing, and Monitoring

Posted on June 18, 2014 by Grindr Team

Part (2 of 3) Tech Talk - iOS Development with Test Driven Development, Unit Testing, and Monitoring Code Metrics

[24:57] John >> Now, what I’ve done here is nothing too fancy, just a Ruby regex. All it does is take all those settings we need, copy them into what we’ll be using, and tell us it’s done. Simple enough, but you may have other uses for that post-install hook. It’s a catch-all for: we have downloaded, we have installed the pods, we have integrated them into the project, do you need to do anything else?

[25:25] John >> It’s important to know that after you run a pod install, it will generate something called an Xcode workspace. For most people, if you create a new Xcode project, you’ll be using something called an ‘.xcodeproj’, an Xcode project file, similar to a .vcproj in Visual Studio. But we need to use multiple projects for this, and what Cocoapods will do is automatically generate an Xcode workspace with everything we need in it, and that is the file we use from now on. You can forget about that Xcode project for time immemorial.
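For readers following along at home, a minimal Podfile of the shape John describes might look like the sketch below. The pod names, versions, and target are illustrative, not Grindr's actual dependencies, and the post-install hook is reduced to a placeholder:

```ruby
# Podfile (sketch; pods and versions are examples only)
platform :ios, '7.0'

target 'MyApp' do
  pod 'Typhoon', '2.0.0'   # dependency injection container
  pod 'OCMock',  '2.2.4'   # mock objects for unit tests
end

# Runs after the pods are downloaded, installed, and integrated
# into the generated .xcworkspace; a catch-all for any extra setup,
# such as copying shared build settings with a bit of Ruby.
post_install do |installer|
  puts 'Pods integrated; remember to open the .xcworkspace from now on.'
end
```

After `pod install`, you open the generated `.xcworkspace` rather than the original `.xcodeproj`, as John notes.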

[25:56] John >> Before we move on does anyone want to ask any questions about Cocoapods, why you might use it, how it might be used?


**** WE ARE HIRING **** Grindr is hiring Senior Android Developers, iOS Developers, and Principal Application Developers. If you are interested, check out our careers page and email us.


[26:04] Audience >> [Inaudible]

[26:20] John >> Excellent question. So Cocoapods has an awesome website. It’s just like finding any other open source framework: it can be difficult to just know it’s out there, and then once you find that it’s out there, what’s a good one to use?

[26:35] John >> So I can go to the site and type in a generality of what I might be looking for. When I type in ‘JSONKit’ here, it tells me what pods are available for that, and all I have to do is plug what I find here into my Podfile.

[26:53] Audience >> [Inaudible]

[26:56] John >> How do I pick one? That’s kind of up to the individual developer. It’s kind of like asking, ‘how do I determine what framework to use?’ Sometimes that just comes with the individual’s knowledge, sometimes it comes with trial and error. It’s kind of a personal choice.

[27:05] Lukas >> What I would suggest in those cases is that you do some sort of POC. If you’re evaluating, let’s say, three or four different frameworks, what we tend to do is come up beforehand with evaluation criteria. So if you have ten things that you are looking for, you use a simple Excel spreadsheet: on the left-hand side you’ve got your criteria, you list all your frameworks at the top, and then you can actually write little POCs using each, test the performance and so on, and you have actual metrics behind which one you want to use.

[27:47] Lukas >> Another thing you also have to look at is the [illegible] project, how active it is. How many people are committing to it. So on Github, or whatever repository it uses, it’s good to check how actively they are updating it, right?

[28:07] John >> Any other questions?

[28:10] Audience >> [Inaudible]

[28:18] John >> Which parameters are we talking about?

[28:21] Audience >> [Inaudible]

[28:27] John >> Correct. Yeah, these are hardcoded version numbers saying that these are the version numbers we want to use. Do not use anything later. Do not use anything older. Use only this version.

[28:36] Lukas >> You have a choice to actually select something that will automatically load the latest version instead, but, you know, what if they deprecated some stuff and all of a sudden you do a build and your stuff breaks? So, you need an entire ceremony when it comes to dependency libraries and updating them, and it’s usually a project that you want to schedule, like, ‘ok, there’s a newer library for this, let’s actually have a user story in our agile board for that so we can refresh it and test it.’
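The version pinning being discussed looks like this in a Podfile (the pod name and version numbers here are only examples):

```ruby
pod 'AFNetworking', '2.2.1'    # exact pin: use only this version
pod 'AFNetworking', '~> 2.2'   # optimistic operator: >= 2.2 and < 3.0
pod 'AFNetworking', '>= 2.0'   # any version at or above 2.0
pod 'AFNetworking'             # unpinned: latest available version
```

The looser forms are where the "ceremony" Lukas mentions matters: schedule dependency updates as explicit stories rather than letting a build silently pick up a changed or deprecated API.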

[29:13] John >> Very cool. Anybody else before we move on?

[29:18] John >> Awesome. So the next thing I would like to talk to you guys about is the Typhoon framework. The Typhoon framework is the dependency injection or IOC container we use within Grindr. There are a couple of IOC containers out there. You can find some small projects on Github but a lot of them are either not being supported any more or have very little adoption but Typhoon seems to have risen to the top.

[29:49] John >> If you look online for an IoC container for Objective-C, you’ll see Typhoon’s name come up a number of times. It can be found via their website, and from there you can go to their Github page. There is also a Cocoapod for Typhoon in order to integrate it.

[30:08] John >> Let’s talk about some of the base things of using Typhoon.

[30:16] John >> So the base part about going into Typhoon is you have these logical groupings called, ‘Assemblies’. Assemblies are a group of related instance classes that can be used. These can be later fed into our creation factory and they are how we will access them later.

[30:38] John >> So you see I’ve created a class here called ‘endpoint log assembly’, and it’s derived from TyphoonAssembly. All assemblies are derived from TyphoonAssembly, we return type id, and we have various instances we will need to retrieve.

[30:54] John >> Within our implementation file – wow, this is big.

[31:02] John >> We tell Typhoon the class that we need, and then we have a chance to do some things to it before it’s returned.

[31:10] John >> So, for example, for the endpoint log we’re telling it, ‘hey, this is the class we need, and we need to inject it with this property, which is also shared within the assembly, which is our endpoint log entry factory’, and we’re also setting the scope.
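Assembled into code, the definition John is walking through might look roughly like this. The class names (EndpointLogAssembly and friends) are our paraphrase of the audio, and the calls assume the Typhoon block-assembly API of that era, so treat this as a sketch rather than Grindr's actual source:

```objectivec
#import <Typhoon/Typhoon.h>

// All assemblies derive from TyphoonAssembly; each method returns
// a definition (typed id) for one instance the container can build.
@interface EndpointLogAssembly : TyphoonAssembly
- (id)endpointLog;
- (id)endpointLogEntryFactory;
@end

@implementation EndpointLogAssembly

- (id)endpointLog
{
    return [TyphoonDefinition withClass:[EndpointLog class]
                        configuration:^(TyphoonDefinition *definition) {
        // Inject a collaborator that is itself defined in this assembly.
        [definition injectProperty:@selector(entryFactory)
                              with:[self endpointLogEntryFactory]];
        // Scope controls how instances are shared (see the scopes below).
        definition.scope = TyphoonScopeSingleton;
    }];
}

- (id)endpointLogEntryFactory
{
    return [TyphoonDefinition withClass:[EndpointLogEntryFactory class]];
}

@end
```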
[31:22] John >> Setting the scope is a really important thing and it, kind of, can be difficult to understand for some things.

[31:30] John >> So there are four major types of scopes.




[31:32] John >> The first one is TyphoonScopePrototype: any time we request this object, we will get a brand new instance of it, every time. No reuse.

[31:40] John >> The second, and the default, is TyphoonScopeObjectGraph. If you do not specify the scope, this is what it will use by default.

[31:49] John >> Object scope graph, object graph scope, rather, forgive me, is an interesting choice and I’m a big fan of it.

[31:57] John >> Let’s say I’m in the view controller and I have some sort of delegate and there’s an object we need to pass between them.

[32:05] John >> Let’s say within our code base there’s also another view controller and delegate and a similar object, so we don’t really have a singleton that’s passed between them, but we don’t want to have to pass the object back and forth by hand.

[32:16] John >> With Typhoon’s object graph scope, any time we request that object, we will receive the same one within a similar frame of execution. So let’s say I have requested this view controller from Typhoon for whatever reason, and I am working within it doing some things, and I request something else within it: it will return me a new instance of that object.

[32:35] John >> I’ve now gone somewhere else within this execution stack, and I request that object again, it’s going to return me the same object, no different, the same one. Now, I’ve gone out of it and I’ve gone into a new view controller to do something else.

[32:49] John >> It will get a new instance and its delegate will get the same instance of that.

[32:54] John >> It’s very handy, very, very handy. The third type of scope you see here on the board is TyphoonScopeSingleton. Does everyone know what a singleton is? One and only one. Awesome.

[33:07] John >> There’s also a fourth type called weak singleton. A weak singleton is just like a weak property that zeroes out if nobody has a reference to it.

[33:20] John >> Awesome. So let’s talk about how these assemblies are used. Does anyone have any questions about the assembly before we move forward? Very cool.

[33:38] John >> So… [laughs]

[33:43] Lukas >> Gimme a while, I’ll change your resolution on the screen.

[33:46] John >> I’m not very good with… can I use the built-in Retina display?

[33:55] Lukas >> That should be a little bit better.

[34:02] John >> It’ll work. Alright, so you see here. I’m initializing a Typhoon component factory and I’m feeding it a number of assemblies. You see, I have… uh, it’s a little difficult to read here, forgive me.

[34:13] John >> This is an array of various assemblies we have within the project. And these are all the different assemblies it will manage.

[34:27] John >> Right.

[34:32] John >> Now’s a good time. Uh, let’s see. Bear with me for just one moment.

[34:43] John >> So, the question comes up: ‘how do we access this factory at runtime, when we want to retrieve things from it?’

[34:52] John >> So there is a protocol we can use for our classes called TyphoonComponentFactoryAware. Whenever any of our instances get initialized through Typhoon and we are TyphoonComponentFactoryAware, it will call setFactory on that object, and we now have a reference to whatever factory we are associated with and can retrieve anything from those assemblies, anything from our own assembly.

[35:20] John >> And it’s fantastic because all we have to do is cast the factory as an assembly and call the method as normal.

[35:30] John >> If we are not TyphoonComponentFactoryAware, there is a convenience method called makeDefault and we now have a default factory that we can access anywhere else. This is mainly used in legacy code.
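A sketch of both access patterns John mentions, bootstrapping the factory and then retrieving an instance, with illustrative assembly names and the era's Typhoon API assumed (classes conforming to TyphoonComponentFactoryAware instead receive the factory via setFactory: when Typhoon builds them):

```objectivec
#import <Typhoon/Typhoon.h>

// Bootstrap: feed the factory every assembly it should manage.
TyphoonComponentFactory *factory = [[TyphoonBlockComponentFactory alloc]
    initWithAssemblies:@[[EndpointLogAssembly assembly]]];

// Legacy-code path: make the factory globally reachable.
[factory makeDefault];

// Retrieval: cast the factory to the assembly and call the
// definition method as if it were a normal method.
EndpointLogAssembly *assembly = (EndpointLogAssembly *)factory;
id endpointLog = [assembly endpointLog];
```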

[35:49] John >> Now I will touch on Typhoon again when we start talking about Unit Testing. Does anyone have any questions about basic use of Typhoon? Very cool.

[36:00] John >> Alright, so who here has done Unit Testing in iOS?

[36:08] John >> Alright. Tell me, how easy did you find it to use?

[36:13] Audience >> Just my first time so…

[36:15] John >> Fair enough. How about you, sir? How easy did you find Unit Testing to be done in Xcode?

[36:21] Audience >> It sucks.

[36:22] John >> Little bit, yeah. Little bit, yeah.

[36:37] John >> So in order to do any Unit Testing in iOS, we create an instance of a class called XCTestCase. We derive from it, we name it. We’ve got a new class.

[36:48] John >> Unlike other classes that Xcode will create for you, we only have an implementation file. You can have a header file but there is typically no need for it.

[36:58] John >> There are one, two, three, four, five major methods to know in any Unit Test environment – or in any Unit Test, I should say.

[37:07] John >> We have a class level setup.

[37:12] John >> And this is a chance to do any type of initialization before any tests are run at the class level. This is run only once before all testing.

[37:24] John >> We then have an instance level setup which we run before any test is run, so this is our chance to set up any variables and get the environment a certain way. It’ll be run once for every test case, and then it will run any method that begins with the word, lower case, ‘test’.

[37:47] John >> Objective-C definitely goes in for coding by convention, and this is a good example of that.

[37:52] John >> It will go through any method that begins with the word ‘test’, no matter how it ends, and run each of those.

[37:58] John >> And then after each test it will run an instance level teardown which is our chance to clean up after ourselves and after all tests are run, we have another class level tear down.

[38:14] John >> Which is a chance to clean up anything we’ve left over from the test as a whole.
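The five methods John lists map onto an XCTestCase like this minimal skeleton (the class and test names here are made up):

```objectivec
#import <XCTest/XCTest.h>

@interface EndpointLogTests : XCTestCase
@end

@implementation EndpointLogTests

+ (void)setUp    { /* class-level: runs once, before any test here */ }
- (void)setUp    { [super setUp]; /* instance-level: before every test */ }
- (void)tearDown { /* instance-level: after every test */ [super tearDown]; }
+ (void)tearDown { /* class-level: runs once, after all tests finish */ }

// Any instance method whose name begins with lower-case "test"
// is discovered and run automatically.
- (void)testExample
{
    XCTAssert(YES, @"any false value fails the test");
}

@end
```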

[38:23] John >> In order to run our Unit Test, we simply hit Cmd + U. It will open up the simulator for you.

[38:34] John >> It’ll begin running our tests. You can see them pass in real time if you have the relevant code file open. And you’ll see at the end that all our tests have succeeded.

[38:44] John >> If we see a failing test…

[38:59] John >> It will tell us that, overall, our tests have failed, show us where the failure is, and give us whatever particular error message we have printed out.

[39:29] John >> Interesting.

[39:33] John >> So we determine whether or not the test has failed through various Xcode asserts. And we can see we have a number of different assertions.

[39:45] John >> Let’s say you’re brand new to Unit Testing and you don’t know any of these: this is all you have to do, actually, the basic assert. This is just like a standard NSAssert. Any false value will throw an exception and fail the test.

[40:05] John >>  One thing that’s incredibly important for at least our unit tests and arguably all unit tests is the ability to mock objects.

[40:13] John >> In any of our testing, we try to keep… we only want to be unit testing the class our tests are built for. We’re not doing integration testing, we’re not testing how it talks to any other class or even our backend, so we use what’s called, ‘mock objects’.

[40:29] John >> Are most people familiar with that term, mock objects? Very cool.

[40:33] John >> So, one deficiency that the Xcode testing suite has is it does not include any type of mock library. There’s no ability to create a mock object natively.

[40:44] John >> There are a number of useful libraries. The one we use in-house is called OCMock. Once again, this is another really popular library; you’ll see it used widely.

[40:57] John >> There are a couple… I’d like to touch on just the highlights of mock, some of the important methods that you would use for this.

[41:03] John >> So there are two different types of mock object that you would use. One is the more traditional mock, where it will only accept methods that you have told it to expect, and if it receives anything else it throws an exception and fails the test.

[41:17] John >> We have another type called a nice mock. And, sidenote: we can mock both classes and protocols.

[41:24] John >> A nice mock of a class or protocol will simply accept anything you throw at it but not return anything for those calls; it will simply return nil.

[41:36] John >> So, we also have the method called Expect on mock objects, and the Expect method gives us… we tell this mock object, ‘hey, I am expecting this method to be called when I later call another method’, which I’ll go over next, ‘and if you did not receive this message, then you need to throw an exception and fail the test.’

[41:58] John >> The method we call at the end of our test, in order to determine that all expected methods have been called, is called Verify.

[42:06] John >>  Let’s say we simply need to stub a method, there is a handy-dandy method called, Stub. We feed it what method we expect and we can also give it a return value in order for it to work nicely with anything else we might be testing.
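In OCMock 2-era syntax, the pieces John names (mock, nice mock, expect, stub, verify) fit together roughly like this; EndpointLogEntryFactory and its method are illustrative stand-ins, not the talk's actual code:

```objectivec
#import <OCMock/OCMock.h>

// Strict mock: any message you did not expect fails the test.
id strictMock = [OCMockObject mockForClass:[EndpointLogEntryFactory class]];

// Nice mock (classes and protocols both work): accepts anything,
// returning nil for whatever has not been stubbed.
id niceMock = [OCMockObject niceMockForClass:[EndpointLogEntryFactory class]];

// Expect: verify will fail if this message never arrives.
[[strictMock expect] entryWithMessage:@"hello"];

// Stub: supply a canned return value; OCMArg matches any argument.
[[[niceMock stub] andReturn:@"an entry"] entryWithMessage:[OCMArg any]];

// ... exercise the class under test here, then:
[strictMock verify];
```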

[42:20] John >> The last thing I wanted to talk about in terms of Unit Testing is I wanted to touch on Typhoon again, and I wanted to talk about the TyphoonPatcher class.

[42:33] John >> If you’ll give me just one moment…

[42:38] John >> So, the great thing about the Typhoon factory and the assemblies is, we set everything up, we go to runtime, and it works as expected. Let’s say I’m writing a test, and the class I’m unit testing uses some things from the Typhoon factory, but we do not want something from the production assembly. This is a very common case: we don’t want to drag in any other classes that we’re not interested in for this particular unit test.

[43:07] John >> In order to do that, we create our factory with whatever assemblies we need just as we normally would and then we create an instance of TyphoonPatcher. What TyphoonPatcher does is that it gives us at runtime the ability to modify what the factory will return.

[43:21] John >> So you see what we’ve done here: for the factory that we’ve created, which includes just the endpoint log assembly, we patch the endpoint log entry factory method. We create a mock endpoint log entry factory, told to expect and return some things, and return that. In order to give that information to the factory, we then attach it as a post processor, and from that point forward, that factory will return the mock object that we expect.
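Sketched out, the patching step looks something like the following. The assembly and factory names are illustrative, and the selectors follow the TyphoonPatcher API as it existed around Typhoon 2.x, so check them against the version you use:

```objectivec
#import <Typhoon/Typhoon.h>
#import <OCMock/OCMock.h>

// Build the factory exactly as production code would...
TyphoonComponentFactory *factory = [[TyphoonBlockComponentFactory alloc]
    initWithAssemblies:@[[EndpointLogAssembly assembly]]];

// ...then override one definition with a mock for this test only.
TyphoonPatcher *patcher = [[TyphoonPatcher alloc] init];
[patcher patchDefinitionWithSelector:@selector(endpointLogEntryFactory)
                          withObject:^id{
    id mockFactory =
        [OCMockObject mockForClass:[EndpointLogEntryFactory class]];
    // ... set expectations and stubbed returns on the mock here ...
    return mockFactory;
}];

// Attach as a post processor; from now on this factory hands back
// the mock wherever the real entry factory would have been injected.
[factory attachPostProcessor:patcher];
```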

[43:55] John >> Alright, and that, I believe, is all the code examples I have for unit testing, Cocoapods, and Typhoon. Who wants… sure.

[44:03] Audience >> So within the unit testing, do you have, like, development, QA, and production environments, or what’s the...

[44:11] John >> Our unit tests are run on… Rudy can talk next about how unit tests are run as part of our continuous integration environment, although we do run them locally before we commit.

[44:22] Luke >> So the idea of unit tests is that that’s a suite I’m expecting every developer to execute when they are making code changes in their local environment, but it’s also part of our continuous integration environment, which we’ll talk about with the build pipeline.

[44:37] Luke >> So whatever you check into the repository is then checked out in the continuous integration process, and those unit tests are run again.

[44:51] John >> Certainly.

[44:52] Audience >> When testing [can’t decipher] can you run it outside of a simulator?

[44:59] John >> The simulator runs every time. It doesn’t run on device, it does not run anywhere else; the simulator will open every time. You don’t see anything, the app itself doesn’t run; only the unit tests run.

[45:11] Audience >> [can’t decipher]

[45:16] John >> Keep in mind the iPhone simulator is not an emulator. You won’t get the speed of the device you’re targeting, but no, it won’t run natively on Mac OS.

[45:28] John >> One caveat to that: if you’re not doing an iOS app and you’re doing a Cocoa app, an actual Mac application, then the tests will run in their native environment.

[45:44] Audience >> Can you verify that certain values will pass to the given function?

[45:48] John >> Yes, through mock objects. Specifically, when we are expecting a particular argument for whatever method we’re expecting, we can give it either the value we expect to get, and if it does not receive that value the test will fail, or we can give it an instance of a class called OCMArg, which you can do some fancy things with to determine what was passed in, you know, ‘does this pass certain values’, or we can just accept anything: whatever is passed, we don’t really care, we just have to have something there to accept it.

[46:26] Audience >> What if we need to call the method twice? How do we specify that?

[46:31] John >> You simply expect it twice. Every time you call expect on it, it’s going to increment a counter, and every time the method is called it’s going to take one off the stack, and take another off if we’ve expected it twice. If it receives one more, then it fails, because it was not expecting it that many times.

[46:56] Luke >> Cool. So, to tie together the things we talked about: Cocoapods, very important concept. Again, Cocoapods allows us to manage dependencies; you saw we use a lot of libraries, OCMock for example, so you want to have an ecosystem that is managed through a framework. Then how we load these things, via the Typhoon framework, allows us to essentially delegate the factory functionality to a container, and a byproduct of that is it allows us to write unit tests that isolate classes, very specific tests for those classes. If there are any dependencies between those classes, Typhoon can essentially patch those dependencies in as OCMock objects, so we can set up very specific expected returns, very specific expected results for execution, so our tests are very atomic, and that’s very important. You don’t want to be testing fifty thousand dependencies as one; you want to unit test your one class. It’s a very important distinction to make between unit testing and functional testing. Functional testing allows you to test a lot of different things; you’re focusing on business logic and you can use automation for that, and that’s a different topic, right?

[48:19] Luke >> So let me talk about the build process and how it all kind of comes together with delivery to production, and I’m going to start by showing you where we were six months ago. Let me click on that. So… oh, that’s not cool. Let me do that… and let me do that. Can you guys see that or no? Ok, let’s do this.

[48:54] Luke >> So, this is essentially a diagram of what our release management process looked like. Let me very quickly focus on a couple of key areas here. If you’ve done iOS development, you know the concept of provisioning profiles and certificates. There are various different types: enterprise, ad hoc, production, as well as development.

[49:21] Luke >> So the way this used to work is that we actually had our iOS developers maintaining all these certificates through iTunes Connect. They would do their development on the iOS machines and essentially… stop it. They would do their local testing by checking out these certificates, which we would export to our Bitbucket repository, to do their local build: we would run xcodebuild, it would create the .app, they would then sign it with the development provisioning profiles and their certificates, and now we’re able to test on a device on the local machine. They would check that code into Bitbucket, and then our Jenkins process would check that code out and go through the same process again, where we do an xcodebuild, we create the .app, we sign it. We would have a publishing process that would create these IPAs for enterprise and ad hoc, and we would use HockeyApp to distribute them. We’re using uTest for crowd-sourcing our tests, and our internal QA would then use the ad hoc builds to test, right?

[50:46] Luke >> Now, where it gets really funny is that now, ok, I’ve done all my testing, it’s great, and I’m ready to go into production. And then what happens? iOS developers check out the code to their local box, they go through this build process, and on their local box they’re signing the IPA, I mean the .app, with the production provisioning profiles, and then we would upload to iTunes and eventually it would get to the App Store.

[51:19] Luke >> Now, when we talk about mature, you know, build management processes, there are a lot of problems here. Number one, developers have access to production certificates and provisioning profiles, allowing them to essentially deploy stuff to production. That’s not what I necessarily want, right?

[51:40] Luke >> Another thing is, what I’m testing here is really not what I’m deploying. There is no release candidate. Because, ok, I’ve gone through all this effort, and now if my developer foobars their checkout, or if someone, between my testing and the release, goes in and checks in some additional code, then my checkout will pick that up. In fact, we had a couple of instances where we put stuff into production that we really didn’t want, right?

[52:11] Luke >> Amazingly, this is probably how the majority of iOS development shops do it, and the reason why is that, for whatever reason, the tools are not there; the environment is not mature enough to automate this process. So we actually spent a lot of time and effort to build a release management pipeline that works more like this, which is our new release process. We obviously still have these provisioning profiles, and we have a release management function that maintains them. There is a separate Bitbucket repository that governs these things, with separate permissions, so developers do not have access to that stuff. Developers can only do development on their iOS machines. They go through this process, they use the development provisioning profiles to do unit testing and local testing, but once we get to the Jenkins box, Jenkins actually checks out the code and checks out the provisioning profiles. Through the build process it creates a versioned release candidate that is then checked into the Nexus artifact repository before the signing process, so that’s my release candidate. My .app is now a release candidate which I can then draw from and publish to different environments. Once I certify it, I use the same artifact from the Nexus artifact repository: I check it out, sign it with the production provisioning profile, and push it to the App Store, and that is done by my release management, not my developers. That’s more in line with a mature enterprise build pipeline and release management process. Questions?
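As a rough shell sketch of the Jenkins side of the pipeline Lukas describes (every path, scheme, credential, and URL below is a placeholder, not Grindr's actual setup):

```shell
# 1. Check out source and, separately, the locked-down profiles repo.
git clone git@bitbucket.org:example/app.git
git clone git@bitbucket.org:example/provisioning-profiles.git

# 2. Build a versioned release candidate.
xcodebuild -workspace app/App.xcworkspace -scheme App \
           -configuration Release \
           archive -archivePath build/App.xcarchive

# 3. Store the unsigned candidate in Nexus, keyed by git hash,
#    BEFORE signing, so the same artifact serves every environment.
zip -r build/App.xcarchive.zip build/App.xcarchive
GIT_HASH=$(cd app && git rev-parse --short HEAD)
curl -u jenkins:password -T build/App.xcarchive.zip \
     "https://nexus.example.com/releases/app-$GIT_HASH.zip"

# 4. Release management later pulls that same artifact, signs it with
#    the production profile, and submits it to the App Store.
```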

Go to:

Part (1 of 3) Tech Talk - iOS Development with Test Driven Development, Unit Testing, and Monitoring Code Metrics

Part (3 of 3) Tech Talk - iOS Development with Test Driven Development, Unit Testing, and Monitoring Code Metrics

(Part 3 of 3) Tech Talk - iOS Development with Test Driven Development, Unit Testing, and Monitoring

Posted on June 18, 2014 by Grindr Team


Part (3 of 3) Tech Talk - iOS Development with Test Driven Development, Unit Testing, and Monitoring Code Metrics

[54:00] Audience >> Do they have access to our iTunes account, or two different [inaudible]?

[54:07] Luke >> Essentially developers don’t have access to… yeah… ok?

[54:14] Luke >> So that’s really what you want. Key here is the release candidate, and actually the usage of the artifact repository. Another thing here is Sonar; we’ll talk about code metrics. I’m very big on having empirical data about what my code is like, you know? I don’t want a ‘look how cool it is’ conversation, I want to see actual metrics; there are standard metrics to measure the quality of my code. We’ll see that in a little bit. And also, I want to have automated testing as part of the process. So every time I do a build, right, there is going to be unit testing, I’m gonna calculate my metrics, and I’m gonna also run my automated functional testing. That’s the future we’re working on right now; we’re doing a big push to have that done by the end of the summer. At the end of the day, every time I do a build, I want a couple of things to get done. I want to calculate my code quality metrics, and I want to fail the build if someone wrote something without a unit test, which means my code coverage percentage decreased. Also, you know, if my unit tests fail, obviously I want to fail the build, and on top of that, once the artifact is done, I want to run my automated testing using functional tests, which means that now I’m actually testing the app functionality, right? And that should happen every night, at least. Right?




[55:42] Luke >>  So now, developers can innovate, they can do whatever, they can test different frameworks. There’s a harness in place to fail fast, preventing me from pushing crap to production.

[55:54] Luke >>  So let’s talk about how it’s actually implemented, so Rudy, if you could take over and talk about our Jenkins setup and the pipeline.

[56:21] Rudy >> Alright, hi guys. I’m now going to go through some of the stuff that Lukas and John went through in a little more detail: how the builds happen, how the artifacts are stored, how we track them, and what we do with them after our QA signs off on them, right?

[56:36] Rudy >> So, Lukas briefly mentioned that what we do now is basically store what we build. I’m gonna start with a couple of tools we’re actually using in-house. We’re using Jenkins for all the build processes; we use Jenkins to build the artifacts. We use Sonar, if you guys are familiar with it, for reporting: after we build, we run the code quality metrics and report them to Sonar. I’m gonna put up Sonar and show you guys the dashboard. It basically tells you where the violations are, what coverage is missing, unit test coverage; if they added five classes and haven’t unit tested them, obviously the coverage percentage drops, so we can flag it as a failure and ship it back to the dev team before it even makes it to QA.

[57:17] Rudy >> We’re using Nexus, which is an artifact repository. After we have the .app and IPA artifacts, we store them there for tracking purposes with the appropriate Git hashes, so we can trace back which Git commits they were associated with and troubleshoot them if we need to.

[57:35] Rudy >> And HockeyApp, if you guys aren’t familiar with it, we use for distribution of the IPAs.

[57:43] Rudy >> I’m gonna start with the Jenkins piece. So what we do is we basically use Jenkins to build the artifacts. Our new build process ties a couple of moving parts together to make this happen. First we start off with the code quality metrics: do they pass? If it passes the code quality and we’re good with that, they have done proper unit testing, all unit tests pass, they haven’t broken anything existing, we go ahead and build the app. All this stuff is still done in Jenkins: Jenkins will build the app after doing the reporting to Sonar, and store the artifact over at the Nexus repository. If all that is successful, which means we have a proper IPA signed with the right provisioning profile, we ship that artifact out to HockeyApp for QA to be able to download and take a look at it.

[58:41] Rudy >> So here are some of the individual jobs we have: an Xcode build job that does the actual build, and the code quality metrics that we run right before the build. I’m gonna open up one of the jobs where you can actually see how it’s tied together. So this is an example: first we do the code quality, if it’s successful we move on to the Xcode build, and if that is successful, we move on to storing the artifact. At that point we contact Nexus and ship out the artifact for storage and retrieval, and we sign it with the proper provisioning profile and store those.

[59:19] Rudy >> The builds themselves take about five to ten minutes, so with all this stuff included, so from the time that a… go ahead sorry…

[59:27] Audience >> When does the build get triggered?

[59:30] Rudy >> So we have two versions of it, right. We have a polling system where we poll the repository every ten minutes. If there are new commits, we check it out, we build it. Automatically.

[59:41] Audience >> On any branch?

[59:30] Rudy >> Yeah, we have a few active branches that we have jobs set up for; then we have master and a kind of stable branch, which is always production-like. Plus, the other option is QA can always trigger it if they want to. So, those are the options we have built in, and we’re also working on refining what the proper interval should be for polling. Is it too frequent or not frequent enough, right? Because if the builds take about ten minutes, you don’t want to keep polling every five minutes; then you have your jobs lined up and queued, and it’s going to bog down Jenkins itself. Any other questions?

[59:41] Audience >> [inaudible]

[59:30] Rudy >> So what we do is, with the code quality metrics… after everything is done, you’ll see that the build is successful, and this is basically Sonar. What we do is gather all the statistics from the unit testing that the job went through and report them to this tool called SonarQube. What it does is basically aggregate them; you see the two green graphs, one for each branch, right now? So, as we keep working on branches and committing code, the code quality reports keep getting generated and updated. If you start missing your commitment, let’s say a threshold is set at 90% for unit test coverage and you add ten new classes with no unit tests, you’re obviously gonna fall below that threshold and not meet it. So we’re gonna flag it, and once we ship the results here, you’re gonna see it go yellow and then all the way down to red, obviously. We can actually enable triggers on these jobs, on this reporting. In this case we have coverage set so that if it’s less than 70%, mark it as a warning, and if it’s less than 65%, fail the build. So when the build fails, it gets kicked back to the dev team and they’re like, ‘alright, we gotta address this, this is not making it to QA’, because the next step is actually the build process, and that never got triggered, so QA has nothing to test.

[61:40] Lukas >> Let me touch a little bit on that. So, from the perspective of having metrics on how well you’re doing as an engineering organization, this dashboard is very important to me, because we have multiple teams: we got Android, we got iOS, we got backend, we got multiple branches. Setting rules and having these static code analyser rules that the team develops over time is very important, right? Because those are essentially your automated code reviews that happen every time you build, right? So as you’re developing these you’re getting better and better, so you can make this a part of your retrospective process if you want. You can make it more collaborative when developers are updating it all the time. The bottom line is that those are very important. Another metric I look at is comment density. You wanna make sure that your code is well documented. I’m not big on writing separate documents from the code; I think documenting in the class itself is very important. That’s on top of naming standards and conventions and whatever. Obviously, if you name your classes correctly, if you name your methods correctly, you don’t need many comments, but if you don’t, then it’s a problem.

[63:06] Lukas >> Duplications, that’s very important for me as well. I don’t want copy and paste, right? It’s very easy if you are banging out stuff. You’d be like, okay, that looks cool, I’m just going to reuse it, copy and paste. If that goes up, I’m not a very happy person, right? And that’s something that we are trying to make work. Cyclomatic complexity is also a very good metric to look at. Anything greater than five, I usually reject the build. That goes hand in hand with having god objects, right? If I have a class with ten thousand lines that does everything, cyclomatic complexity goes through the roof. I want nice, clean decomposition, right? So, cyclomatic complexity is one way to ensure that you have that. And then, our good old unit test coverage. Obviously, we want this number around 80%. You’re never going to have 100% coverage; you may strive for that, but I use the 80-20 rule: 80% code coverage is good. The bottom line is that if you set the threshold and if you set up your failure SLA, you’re kind of saying, ok, engineering, we have 60% now. From now on you cannot decrease this number, you can only go up, which means now you’re forcing your developers to write code that has proper unit tests in place, and that’s part of your automated process. You don’t have to rely on humans to do code reviews or whatever. It’s part of your build pipeline.
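Cyclomatic complexity is roughly “decision points plus one.” As a hypothetical illustration (not how Sonar’s analyser actually works), a crude Python approximation over a snippet’s syntax tree might look like this:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Crude estimate: 1 + the number of branching constructs and
    boolean operators in the snippet. Real analysers score per method,
    with language-specific rules."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' contributes two extra decision points
            decisions += len(node.values) - 1
    return decisions + 1
```

A straight-line function scores 1 and each extra branch pushes it up, which is why a ten-thousand-line god object blows past a threshold of five immediately.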

[64:48] Rudy >> You can kind of have, yeah go ahead…

[64:51] Audience >> Does Sonar sit on top of your repository, as in on GitHub?

[64:56] Rudy >> No, this is a separate thing, a separate application. So what happens is, when Jenkins builds it and runs your unit test coverage, it actually publishes the numbers to Sonar. Sonar takes them, figures out what to do with them and does the whole graphing for you. So you just create the link between Jenkins’ unit test reporting and Sonar, and it takes care of everything else.
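That link is mostly configuration on the Jenkins side. A hypothetical `sonar-project.properties` for a setup like this might look as follows (every key, name, and path here is an illustrative placeholder, not Grindr’s actual configuration, and the exact coverage-report key depends on your analyser version):

```properties
# Hypothetical analysis settings; adjust keys and paths to your analyser.
sonar.projectKey=example:ios-app
sonar.projectName=Example iOS App
sonar.projectVersion=1.0
sonar.sources=Classes
# Where the Jenkins unit-test job wrote its coverage report
sonar.cobertura.reportPath=build/reports/coverage.xml
```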

[65:17] Audience >> So when Jenkins builds it, it sends it to Sonar?

[65:21] Rudy >> Sonar, yeah, it publishes the reports to Sonar. Actually, I’m gonna do one thing for the demo right now and then I’ll carry on, and you guys can see the difference. So I’m gonna go to alerts, and you noticed what the coverage was. I wanted it to fail: it gave me a warning at 80% and failed at 90%, right?

[65:52] Rudy >> So I’m just gonna trigger a build right now and we can continue.

[66:05] Audience >> So where is it building from? Is everybody ready with a common file? Is there a server somewhere?

[66:11] Rudy >> No, this is checking out the code repository, right, a given branch, and running the unit test coverage and doing the reports. If that is successful and Sonar doesn’t report a failure on it, which means you haven’t exceeded any of the thresholds, right, you’re still good, then Jenkins carries on with doing the build and storing the artifact.

[66:29] Lukas >> We set up build pipeline templates through Jenkins, so there’s a wizard that we have that allows you to specify the branch. When you enter the branch, you enter the provisioning profile, and that sets up all those builds for you with the whole multi-step build process, and then you just schedule and run. So let’s say I’ve been developing on my feature branches and checking into stable. I’m ready to, okay, I’ve got enough stuff to do a release candidate. I would cut the release branch, then go to Jenkins and use my wizard to set up a build for this branch, and now our build gets created. There’s also a stable branch that we build against all the time. And that’s all; then you can lock it.

[67:18] Rudy >> So the moving pieces for getting a build ready for QA are basically: checking out the code, having proper unit test coverage. We run the unit test coverage; if everything passes, we run the build and create the artifact. If that is successful, we store it over at Nexus, and that’s the raw app file, right? So at that point we can sign it with any provisioning profile we need, which addresses an issue Lukas mentioned they were having earlier: once QA signed off, the developer would check out the code again and do a brand new build. That’s all gone now. We have the actual raw app that QA signed off on, depending on which provisioning profile we used. If they sign off on that, we take the exact same app file, we sign it with production, and we ship that same thing off to Apple.
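The key invariant in that flow is that the artifact is built once and only re-signed per environment. A toy Python sketch of the idea (signing is mocked as plain metadata and all names are hypothetical; the real pipeline does this with Jenkins jobs and Apple’s code-signing tooling):

```python
def promote(artifact: bytes, profile: str) -> dict:
    """Attach a provisioning profile to a stored artifact without
    rebuilding it; signing is simulated here as metadata."""
    return {"payload": artifact, "signed_with": profile}

raw_app = b"app-binary-from-nexus"                   # built once, stored in Nexus
qa_build = promote(raw_app, "qa-profile")            # what QA tests
prod_build = promote(raw_app, "production-profile")  # what ships to Apple

# The payload QA signed off on is byte-for-byte what goes to Apple.
assert qa_build["payload"] == prod_build["payload"]
```

Rebuilding from source after sign-off (the old workflow) breaks exactly this guarantee, since a fresh checkout and build can differ from what QA tested.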

[68:00] Rudy >> Now, dev could have messed around with the branch as much as they wanted. That one build is going out, the one QA signed off on.


Go to:

Part (1 of 3) Tech Talk - iOS Development with Test Driven Development, Unit Testing, and Monitoring Code Metrics

(Part 2 of 3) Tech Talk - iOS Development with Test Driven Development, Unit Testing, and Monitoring Code Metrics

Happy Birthday Grindr

Posted on March 25, 2014 by Grindr Team

Today we mark a milestone: Grindr turns 5. It’s been a long road and quite a trip, but in those 5 years we continued to grow, and that’s in large part thanks to our users. We at Grindr want to thank you for making us the go-to app for finding gay men. Before Grindr, finding other gay men was a real challenge, and we’re proud that we can provide the fastest and easiest way to meet a guy. In fact, our users have made millions of connections (and that number is only going up).

Since our inception in 2009, Grindr has taken the world by storm, experiencing explosive growth with iPhone, iPad, iPod touch, Android and BlackBerry users in 192 countries. To date, we have more than five million active monthly users on the app, and their daily chat messages have topped 38 million. Not to mention the number of Grindr photos sent has jumped to over 3.1 million a day, and we have been downloaded more than 10 million times.

To celebrate our 5th birthday, we surveyed our massive user base to learn about the habits and trends among the guys that use the app. Users got personal and shared their meet-up history, favorite feature on a guy, and more. We took those survey results and made an infographic; take a look here.

And thanks again for helping make Grindr the app it is today.



The Grindr Best of 2013 Awards

Posted on December 9, 2013 by Grindr Team

Award season is officially underway! The votes have been counted and the many men of Grindr have made their voices heard in Grindr’s Best of 2013 poll. You have let us know who and what you view to be the best of the best… so without further ado, the winners are:

• Gay icon of the year: Neil Patrick Harris
• Straight ally of the year: Lady Gaga
• Best coming out story: Wentworth Miller
• Enemy of the LGBT community: Vladimir Putin
• Best song of 2013: “Same Love” by Macklemore and Ryan Lewis
• Best movie of 2013: Gravity
• Best TV show on air: “Modern Family”
• Most wanted man in the Grindr cascade: Channing Tatum
• Best comeback of 2013: Netflix
• Social blunders: Miley Cyrus’ twerking
• Biggest loss of 2013: Cory Monteith
• Next celebrity to come out: Taylor Lautner
• Hottest gadget of 2014: iPad Air
• Next state/country to legalize gay marriage: Florida
• Next celebrity train wreck: Justin Bieber
• What will become obsolete: Facebook

Congratulations to all the winners and Channing Tatum…should you ever need that Xtra subscription…it’ll be on us.

Meet the New Grindr Guys

Posted on October 17, 2013 by Grindr Team


Who’s hot and ripped and toned all over? The new Grindr guys of course.

After 780 submissions, more than 2,500 votes and an intense battle of the abs, Matthew Stehlik of Orlando, FL and Eric Angelo of Hollywood, CA have been crowned the winners of the Grindr Model Contest.

We were looking for the best guys to represent the new Grindr and boy did we find them (just check out their pictures below).

Eric (on the left) is a Texas native who now calls Hollywood home. He’s a go-go dancer who doesn’t believe in shirts or pants, and in his free time he enjoys collecting old books and reading science fiction. Matthew (on the right) hails from Orlando by way of Pittsburgh. Matthew is a bartender and has modeled for several years - we can see why.
These two lucky lads will be brought to Los Angeles for a three night stay that includes some serious pampering, a killer new wardrobe from Kent Denim and Maor Luz, a Grindr photo shoot and a coming out party. Keep a look out for these guys to be featured in our upcoming Grindr ads.

Want to see more of Eric and Matthew? Check out their social media pages here:

Eric: Youtube & Facebook

Matthew: Instagram & Facebook