
Conversation

yloiseau
Member

This proposal presents a simple and unified interface specification for
tests (unit, functional) in Golo, in order to maximize reuse of the
individual components.

Member

@jponge left a comment


I like what I read 👍

@danielpetisme
Member

IMHO, the extractor step is a bit overkill.
When you start with a test framework you simply learn the API convention (testXXX, shouldYYY, fileSpec.ab, etc.) and stick to it. Can you elaborate on that part? Do you have examples?

For me, the language should provide the minimal mechanism to run tests. IMHO, we should:

  • create test objects (i.e. Test, Suite) and invoke a run method
  • provide a runner with an addTest method to collect tests and a run method to trigger the execution of all the registered tests
  • provide a NOOP reporter.

Then the more elaborate APIs, runners, reporters, assertions could be 3rd-party libs.

By default, the golo test command could return the number of test failures (0 if everything is OK).
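The minimal mechanism sketched above could look like the following (a Python sketch for illustration only; the class and method names Test, Runner, add_test and noop_reporter are hypothetical, not an actual Golo API):

```python
# Hypothetical sketch: a Test wraps a callable, a Runner collects tests
# and runs them, and a NOOP reporter only counts failures, which the
# `golo test` command could use as its exit code (0 = everything OK).

class Test:
    def __init__(self, description, fn):
        self.description = description
        self.fn = fn

    def run(self):
        try:
            self.fn()
            return None                      # success
        except AssertionError as e:
            return e                         # failure

class Runner:
    def __init__(self):
        self.tests = []

    def add_test(self, test):
        self.tests.append(test)

    def run(self):
        # Returns a list of (description, error-or-None) results.
        return [(t.description, t.run()) for t in self.tests]

def noop_reporter(results):
    # NOOP reporter: no output, just the number of failures.
    return sum(1 for _, err in results if err is not None)

runner = Runner()
runner.add_test(Test("passes", lambda: None))

def failing():
    raise AssertionError("boom")

runner.add_test(Test("fails", failing))
failures = noop_reporter(runner.run())
print(failures)  # → 1
```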

Thoughts?

@yloiseau
Member Author

yloiseau commented Jan 4, 2017

@danielpetisme the extractor is defined by the test framework, not by the user. In the case of your PR eclipse-archived/golo-lang#308, it's the work done in TestCommand by treeFiles, listFiles, pathToClass (which cover my walking and compiling steps and are generic enough to be provided in the standard lib, as mentioned line 111) and in loadSpecification which covers my inspecting and building steps and are specific to the conventions of your framework (i.e. a function named spec).

The purpose of the extractor is to abstract this task, as it can be framework specific. For instance, in a framework based on conventions (e.g. execute all public functions named test_*), while the walking and compiling steps would be roughly the same, the inspecting and building ones would be very different: create a suite by inspecting all methods whose name is matching, maybe even using the golo doc as the test description, manipulating the IR instead of only using reflection on compiled modules.

However, the user only uses the framework and tells the golo test command which extractor to use (the one of the corresponding framework).

About golo test returning the number of failed tests, that's what I had in mind (see the examples in the entry point section: System.exit(reporter(runner(extractor(path)))), since the reporter returns the number of errors). Maybe it's not clear enough.
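The entry-point composition described above can be sketched as follows (a Python illustration with hypothetical stand-ins for each pluggable stage; in the proposal the extractor is supplied by the test framework, not hard-coded):

```python
import sys

# Hypothetical stand-ins for the pipeline stages of the proposal:
# extractor builds a suite from a path, runner executes it,
# reporter returns the failure count used as the process exit code.

def extractor(path):
    # Real extractors would walk `path`, compile modules, inspect them
    # and build suites; here we fake a single passing test.
    return [("a test that passes", lambda: None)]

def runner(suite):
    results = []
    for desc, test in suite:
        try:
            test()
            results.append((desc, None))
        except AssertionError as e:
            results.append((desc, e))
    return results

def reporter(results):
    # Number of failed tests: 0 means everything is OK.
    return sum(1 for _, err in results if err is not None)

if __name__ == "__main__":
    sys.exit(reporter(runner(extractor("src/tests"))))
```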

About what is provided by the language, I agree that we can provide a minimal runner and reporter. However, the main purpose of this proposal is to define a minimal API with low coupling. Given the proposed API for suites and tests, we already have the necessary tools to create the suite.

Given a provided runner and reporter, a minimal test file can be:

module TestFoo

import Foo

function main = |args| {
  gololang.testing.Utils.run(list[
    ["First suite", list[
      ["sub suite 1", list[
        ["test that we can create a foo", {
          let f = Foo.Foo(42)
          require(f oftype Foo.types.Foo.class, "error creating a foo")
        }],
        ["test that a foo has an answer", {
          let f = Foo.Foo(42)
          require(f: answer() == 42, "error in the answer value")
        }]
      ]],
      ["sub suite 2", list[
        ["test that is ok", {}],
        ["test that fails", { throw AssertionError("fails") }]
      ]]
    ]],
    ["Second suite", list[
      ["a foo should plop", {
        require(Foo.foo() == "plop", "error testing foo")
      }],
      ["a foo should bar for 42", {
        require(Foo.bar("answer") == 42, "error testing bar")
      }]
    ]]
  ])
}

where gololang.testing.Utils::run is just:

function run = |suites| {
  System.exit(defaultReporter(defaultRunner(suites)))
}

However, writing the suite in this way is not really appealing. That's precisely the job of the suite definition and extraction functionalities provided by a test framework 😄

Implementations for a runner and reporter can be quite simple, such as:

function defaultRunner = |suites| -> list[
  [desc, match {
    when isClosure(test) then trying(test)
    otherwise defaultRunner(test)
  }]
  foreach desc, test in suites
]

function defaultReporter = |results| {
  var err = 0
  foreach desc, result in results {
    if result oftype Result.class {
      println(desc + " " + result: either(
        |v| -> "OK",
        |e| -> "FAIL: " + e: message()))
      if result: isError() {
        err = err + 1
      }
    } else {
      println("# " + desc)
      err = err + defaultReporter(result)
    }
  }
  return err
}

or use multithreading, futures, lazy evaluation, color the console output, and so on.

@danielpetisme
Member

What are we waiting on to validate the spec?

@yloiseau yloiseau merged commit 5f7cf88 into golo-lang:master Feb 15, 2017