
> I don’t care how that function is implemented, only that it returns a boolean

Let's take your example of "shouldExecute". I assume your unit test operates on some inputs (with the values provided inside the unit test itself, naturally), and "shouldExecute" has potentially some nontrivial logic in it. Say it reads the value of some environment variable and, if it is the right one, returns "true".

Now there are two possibilities:

a) your test inputs are made up. For example, you never set the environment variable, and coerce "shouldExecute" to return "true" anyway. The problem with such a test is that it is fiction, not a test of actual production behavior. Sure, you will test what would happen if the logic decided it should execute even though no environment variable is set, but this will never happen in production. In production, the lack of an environment variable would result in "shouldExecute" returning "false", and you should test for *that*. So you do care about the details of "shouldExecute", because you need to be aware that it returns "true" only if the appropriate environment variable is set. And if you don't care whether "shouldExecute" returns true or false, then why call it in the first place? What are you even testing? I hope your "shouldExecute" doesn't have any side effects you depend on, and I do hope you do not use code coverage as a goal unto itself.

Plus, such a test cannot be used as "executable documentation", because the inputs are simplified to the point of irrelevance and do not help in understanding actual program behavior.

b) your test inputs are realistic and reflect actual production behavior. This means you will have properly set up the environment variable so that "shouldExecute" returns true. With that, the test resembles actual production behavior, has good bug-catching ability, and can serve as an executable specification. But here again you will have to worry about the implementation of "shouldExecute".
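
To make the contrast concrete, here is a minimal sketch in TypeScript with Jest-style tests; the implementation of shouldExecute, the RUN_JOB variable and the runJob caller are all hypothetical, assumed only for illustration:

  // Hypothetical implementation, assumed purely for this example.
  function shouldExecute(): boolean {
    return process.env.RUN_JOB === "true";
  }

  // Hypothetical caller that consults the gate before doing work.
  function runJob(gate: () => boolean = shouldExecute): string {
    return gate() ? "executed" : "skipped";
  }

  // (a) made-up input: the env variable is never set, the result is coerced.
  test("fiction: coerced shouldExecute", () => {
    const gate = jest.fn().mockReturnValue(true); // production would return false here
    expect(runJob(gate)).toBe("executed");
  });

  // (b) realistic input: the environment is set up the way production sets it.
  test("realistic: real shouldExecute, real env variable", () => {
    process.env.RUN_JOB = "true";
    expect(runJob()).toBe("executed");
  });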

---

Let me offer you another example. Imagine we want to test:

  Compile(SourceCode sourceCode) {
      ValidateSyntax(sourceCode);
      /* complex logic post-processing the sourceCode here */
  }

I could say here "I am testing Compile, and I do not care if ValidateSyntax throws an exception; I will just coerce it to return without throwing an exception".

And then I can write a test that takes as input some simple sourceCode like "blablabla" and claim I have somehow tested the "/* complex logic post-processing the sourceCode here */". But this is silly: in reality such fake input would never survive validation, so testing what happens afterwards is just a waste of time. Hence I need to ensure the sourceCode passes the *actual* validation, and hence I need to understand the logic inside the "ValidateSyntax" method.

It gets worse. Now imagine we have thoroughly tested the complex logic while using a mocked-out ValidateSyntax, but then the syntax changes and the validate method starts behaving differently. If I have a mock of the "ValidateSyntax" method I might still feel good - the test is green, the coverage is still high. Except it is all a lie. I run "Compile" on some production data and it blows up. Why? Because the test was working against a "ValidateSyntax" mock that was mocking the result of validating the obsolete syntax, which now, with the changed behavior, would not actually pass validation and would blow up. Basically my test was telling me my "Compile" method works IF I assume the syntax is still the obsolete one, but it doesn't tell me how my code behaves with the current syntax.

So every time program behavior changes, I need to go through all my mocks that were duplicating that behavior, and see if they are still faithfully reflecting it. Otherwise I risk ending up with a green test suite that tests nonexistent, impossible program executions.
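
A small sketch of that failure mode, again in TypeScript with Jest-style tests; the "must start with module" syntax rule and the lower-case names are made up purely to keep the example short:

  // Hypothetical validator: the *new* rule is that source must start with "module".
  function validateSyntax(source: string): void {
    if (!source.startsWith("module")) throw new Error("syntax error");
  }

  // Hypothetical compile: validation first, then the "complex post-processing".
  function compile(source: string, validate = validateSyntax): string {
    validate(source);
    return source.toUpperCase(); // stand-in for the complex logic
  }

  // Green, but a lie: the mock silently encodes the obsolete syntax rules.
  test("compile with mocked ValidateSyntax", () => {
    const validate = jest.fn(); // never throws, no matter what the real rules now say
    expect(compile("program foo", validate)).toBe("PROGRAM FOO");
  });

  // Red, and rightly so: the same input no longer survives the real validation.
  test("compile with the real ValidateSyntax", () => {
    expect(() => compile("program foo")).toThrow("syntax error");
  });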




I’m not quite sure what you are trying to say here.

In both of the examples you give you seem to be assuming that ‘not caring about the functionality of function X in function Y’ means ‘not caring about function X at all’.

This is untrue. You should test both.

If you want shouldExecute to depend on an env variable, you have separate tests for that: one for the positive and one for the negative scenario.

In the same sense you have separate tests for Compile and Validate, but in the Compile test you may not care to exercise the validation. At the very least you should have a test for Validate separate from Compile.


Naturally, I agree that the shouldExecute / ValidateSyntax methods should be tested too, but this is not the whole story.

Let me clarify my point with a more precise example.

Let's say we want to test this:

  ParseUrlDomainAndPath(string url) {
      string validatedUrl = Validate(url);
      (domain, path) = /* inline logic to extract domain and path from the validatedUrl */;
      return (domain, path);
  }

Now I have a few options to unit test it:

1. I could mock out "Validate" to return the "url" passed as input, call ParseUrlDomainAndPath("___:||testDomain|testPath") in the test, and assert it returns (testDomain, testPath). Such a test might even pass, if the inline logic for extracting the domain and path is not too fussy about the delimiters. In that case I end up with a test telling me "if you call ParseUrlDomainAndPath with a URL that has | instead of / and some weird scheme of ___, it will successfully return". This is a lie. If you call it in production, it will fail validation. So the test gives you a false impression of how the system behaves. You are testing how parsing of the domain and path works on a URL that has | instead of /, but this will never happen in production. Thus, you are testing made-up behavior. A waste of time.

2. As in 1., but instead I could mock out Validate to return whatever validatedUrl I want, completely disregarding url. In that case, what is the point of even having Validate involved in the test? Instead, let's refactor towards a functional core: take the inline logic, capture it in a method called "ExtractDomainAndPathFromUrl(string validatedUrl)" and pass validatedUrl to it directly (see the sketch after this list). No need to deal with Validate at all, no need to mock anything, no need to fix up any broken mocks. Great!

3. As in 1., but as input I pass "foo". The mocked-out Validate returns "foo", and we now try to extract the domain and path from it. This will either throw an exception or return garbage, so our test fails. But we don't care about this failure at all. In actual production behavior the domain and path extraction logic would never even execute, because Validate would fail beforehand. So here we have the reverse of situation 1.: in 1. we had a test telling us production will work while in reality it won't (as Validate will throw), and here we have "found a bug" (the test fails) that doesn't matter, as it cannot happen in production. Again, a waste of time.
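
A minimal sketch of the refactoring from option 2, in TypeScript rather than the C#-like pseudocode above; the regex validation and the use of the built-in URL class are stand-ins for whatever the real logic would be:

  // Hypothetical validator: a stand-in for the real Validate logic.
  function validate(url: string): string {
    if (!/^https?:\/\//.test(url)) throw new Error("invalid URL");
    return url;
  }

  // Functional core: the former inline logic, now a directly testable pure function.
  function extractDomainAndPathFromUrl(validatedUrl: string): [string, string] {
    const { hostname, pathname } = new URL(validatedUrl);
    return [hostname, pathname];
  }

  function parseUrlDomainAndPath(url: string): [string, string] {
    return extractDomainAndPathFromUrl(validate(url));
  }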

So with 1. and 3. being a waste of time, the only option left is 2. You have one test that a) checks that Validate behaves correctly, b) checks that ExtractDomainAndPathFromUrl behaves correctly, and c) checks that both of these methods collaborate with each other correctly (aka a "mini integration test").

You could now argue that this is wrong - that I should have one test for Validate, one test for ExtractDomainAndPathFromUrl, and one test for ParseUrlDomainAndPath that mocks out both. But let me ask: why? In that case you lose the benefit of the "mini-integration" test. When you test ParseUrlDomainAndPath with everything mocked out, you test an empty husk of logic - only whether the calls are made in the proper sequence. You cannot even really assert anything meaningful! (aka the "mockery" anti-pattern). You end up with 3 tests instead of 1 and a cr*pton of unreadable, brittle mock logic. And chances are, the one test of ParseUrlDomainAndPath that doesn't mock any internal business logic will already cover significant parts of Validate and ExtractDomainAndPathFromUrl, and so will reduce the need for additional "corner case" tests targeting these methods directly. Having 1 test with proper in-process dependencies instead of 3 tests plus mocks is just a win all over the place: less testing logic, better ability to catch bugs (thanks to the bonus mini-integration testing and realistic data), an executable specification aiding program comprehension, less brittle tests, no misleading green tests, and no made-up, irrelevant failing tests.
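
For what it's worth, that single mock-free test could look like this, under the same assumed signatures as in the sketch above:

  // One test, no mocks: it exercises Validate, ExtractDomainAndPathFromUrl and their collaboration.
  test("parseUrlDomainAndPath on a realistic URL", () => {
    expect(parseUrlDomainAndPath("https://testdomain/testpath")).toEqual(["testdomain", "/testpath"]);
  });

  // The rejection path is covered with equally realistic data.
  test("parseUrlDomainAndPath rejects an invalid URL", () => {
    expect(() => parseUrlDomainAndPath("foo")).toThrow("invalid URL");
  });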

There is one more, very important benefit: you can step through such a test with a debugger and see how all the components collaborate with each other, on real data. If you use mocks and fake, oversimplified data, you only get very shallow slices of code and cannot reason about anything relevant.


What you are testing is that the results of shouldExecute() are followed.

This is verified either by checking that the mock function you passed in got called with the right parameters, or by checking that the return value shows the function was executed.

If your shouldExecute function is bad, its tests should fail, not some other file's.
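
A minimal Jest-style sketch of what that verification looks like; runJob and its gate parameter are hypothetical, assumed only for illustration:

  // Hypothetical caller: it should only run the job when the gate allows it.
  function runJob(gate: () => boolean): string {
    return gate() ? "executed" : "skipped";
  }

  test("runJob follows the result of shouldExecute", () => {
    const gate = jest.fn().mockReturnValue(false);
    expect(runJob(gate)).toBe("skipped");   // the return value shows the job did not run
    expect(gate).toHaveBeenCalledTimes(1);  // and the gate was actually consulted
  });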



