From 96871dc44215701101d272e3a6369d4edede8a6f Mon Sep 17 00:00:00 2001
From: gopi
Date: Thu, 9 Oct 2014 02:00:26 -0700
Subject: [PATCH 1/6] first take

---
 blog.md | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)
 create mode 100644 blog.md

diff --git a/blog.md b/blog.md
new file mode 100644
index 00000000000..ff4cfde8072
--- /dev/null
+++ b/blog.md
@@ -0,0 +1,40 @@
+# About me at Rackspace
+I am a Software Developer in Test III and have been with Rackspace since 2011. I started off writing automated Selenium tests for the legacy Control Panel, now replaced by [Reach](https://mycloud.rackspacecloud.com). Later, I grabbed the opportunity to venture into the world of API testing on the Cloud Integration team, covering services such as Sign Up, Accounts and Billing. Intrigued by Billing, I continued with testing billing for Compute and Cloud Block Storage. Life happened, and I moved from the San Antonio office to the San Francisco office. Soon after, I joined the AutoScale team, where I worked for over a year and a half writing tests and setting up the test infrastructure for AutoScale. Currently I am on the Monitoring team, getting ramped up.
+
+
+# Mimic - Mocks not driven by tests
+
+AutoScale allows users to automate scaling servers up or down based on traffic load, or on a schedule. To do so, AutoScale depends on other services such as Identity (for authentication and impersonation), Compute (to scale servers up or down) and Load Balancers (to load balance the servers created). AutoScale can be a successful product only if all of the services it relies on are functional and reliable at all times.
+
+I had taken up the task of writing tests for AutoScale and had envisioned writing positive and negative tests in the following categories,
+
+- Functional tests - to verify the API contracts (e.g., verify the responses of all the API calls)
+- System Integration tests - to verify the integration between AutoScale and its dependent systems (e.g., the user requested to scale up by 2 servers; were the 2 servers built successfully and assigned to a load balancer?)
+
+The functional tests were straightforward. And everything was going well with writing the system integration tests, except when it came to writing negative tests. There was no way to simulate the error conditions. So, such negative tests looked like [this](https://github.com/rackerlabs/otter/blob/master/autoscale_cloudroast/test_repo/autoscale/system/group/test_system_group_negative.py#L109-114),
+
+```
+def test_system_create_delete_scaling_group_server_building_indefinitely(self):
+    """
+    Verify create delete scaling group when servers remain in 'build' state
+    indefinitely
+    """
+    pass
+```
+I continued to write other tests, verifying them against the dependent systems to ensure they were not flaky. However, as the test coverage grew, and the tests were being run as part of the CI/CD process, we began to notice the following,
+- Tests were taking very long to complete (due to server build times, network latency, etc.)
+- Tests would begin to fail, not because of AutoScale, but because they ran into an error condition in one of the dependent systems. And there was no way to simulate such error conditions in order to handle them within the AutoScale code base.
+- We were using up way too many resources (well, it's AutoScale!)
+- Tests were becoming a burden and nobody fancied running them during development (including me, when developing more tests!)
+
+
+
+
+
+
+
+
+
+
+
+

From 8c9a377622faecd4e183170b1507f6a5746e34fc Mon Sep 17 00:00:00 2001
From: lekhajee
Date: Thu, 9 Oct 2014 02:11:17 -0700
Subject: [PATCH 2/6] correcting user

---
 blog.md | 40 ----------------------------------------
 1 file changed, 40 deletions(-)
 delete mode 100644 blog.md

diff --git a/blog.md b/blog.md
deleted file mode 100644
index ff4cfde8072..00000000000
--- a/blog.md
+++ /dev/null
@@ -1,40 +0,0 @@
-# About me at Rackspace
-I am a Software Developer in Test III and have been with Rackspace since 2011. I started off writing automated Selenium tests for the legacy Control Panel, now replaced by [Reach](https://mycloud.rackspacecloud.com). Later, I grabbed the opportunity to venture into the world of API testing on the Cloud Integration team, covering services such as Sign Up, Accounts and Billing. Intrigued by Billing, I continued with testing billing for Compute and Cloud Block Storage. Life happened, and I moved from the San Antonio office to the San Francisco office. Soon after, I joined the AutoScale team, where I worked for over a year and a half writing tests and setting up the test infrastructure for AutoScale. Currently I am on the Monitoring team, getting ramped up.
-
-
-# Mimic - Mocks not driven by tests
-
-AutoScale allows users to automate scaling servers up or down based on traffic load, or on a schedule. To do so, AutoScale depends on other services such as Identity (for authentication and impersonation), Compute (to scale servers up or down) and Load Balancers (to load balance the servers created). AutoScale can be a successful product only if all of the services it relies on are functional and reliable at all times.
-
-I had taken up the task of writing tests for AutoScale and had envisioned writing positive and negative tests in the following categories,
-
-- Functional tests - to verify the API contracts (e.g., verify the responses of all the API calls)
-- System Integration tests - to verify the integration between AutoScale and its dependent systems (e.g., the user requested to scale up by 2 servers; were the 2 servers built successfully and assigned to a load balancer?)
-
-The functional tests were straightforward. And everything was going well with writing the system integration tests, except when it came to writing negative tests. There was no way to simulate the error conditions. So, such negative tests looked like [this](https://github.com/rackerlabs/otter/blob/master/autoscale_cloudroast/test_repo/autoscale/system/group/test_system_group_negative.py#L109-114),
-
-```
-def test_system_create_delete_scaling_group_server_building_indefinitely(self):
-    """
-    Verify create delete scaling group when servers remain in 'build' state
-    indefinitely
-    """
-    pass
-```
-I continued to write other tests, verifying them against the dependent systems to ensure they were not flaky. However, as the test coverage grew, and the tests were being run as part of the CI/CD process, we began to notice the following,
-- Tests were taking very long to complete (due to server build times, network latency, etc.)
-- Tests would begin to fail, not because of AutoScale, but because they ran into an error condition in one of the dependent systems. And there was no way to simulate such error conditions in order to handle them within the AutoScale code base.
-- We were using up way too many resources (well, it's AutoScale!)
-- Tests were becoming a burden and nobody fancied running them during development (including me, when developing more tests!)
-
-
-
-
-
-
-
-
-
-
-
-

From be290edc92fff4fbb0d98a0e32f94917250d5d29 Mon Sep 17 00:00:00 2001
From: lekhajee
Date: Thu, 9 Oct 2014 02:15:55 -0700
Subject: [PATCH 3/6] blog content

---
 blog.md | 39 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)
 create mode 100644 blog.md

diff --git a/blog.md b/blog.md
new file mode 100644
index 00000000000..2d8ced5cd6c
--- /dev/null
+++ b/blog.md
@@ -0,0 +1,39 @@
+## Mimic - Mocks not driven by tests
+
+I was a QE on the Rackspace Auto Scale team and would like to take you through the experiences and lessons learned from testing Auto Scale, and how the ability to test it improved drastically using Mimic.
+
+[Rackspace Auto Scale](http://docs.rackspace.com/cas/api/v1.0/autoscale-devguide/content/Overview.html) is a web service that allows users to automate scaling servers up or down based on user-defined conditions, or on a schedule. To do so, Auto Scale depends on other services such as [Identity](http://docs.rackspace.com/auth/api/v2.0/auth-client-devguide/content/Overview-d1e65.html) - for authentication and impersonation, [OpenStack Compute](http://docs.rackspace.com/servers/api/v2/cs-devguide/content/ch_preface.html) - to scale servers up or down, and [Load Balancers](http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/Overview-d1e82.html) - to load balance the servers created.
+
+Successfully testing Auto Scale meant testing not just the features of Auto Scale itself, but that it is consistent irrespective of any failures in the upstream systems. I had taken up the task of writing tests for Auto Scale and had envisioned writing positive and negative Functional and System Integration tests.
+
+Functional tests to validate the API contracts and behavior of Auto Scale. An example of a positive functional test is to verify that the [create scaling group](http://docs.rackspace.com/cas/api/v1.0/autoscale-devguide/content/POST_createGroup_v1.0__tenantId__groups_autoscale-groups.html) API call returns the expected response and that a scaling group is successfully created. An example of a negative functional test is to verify that create scaling group results in a 400 response code when the minimum servers specification (minEntities) for the group is set greater than the maximum servers allowed (maxEntities).
+
+System Integration tests to validate the integration between Auto Scale and its dependent systems. An example of a positive System Integration tests is to verify that when a user creates a scaling group and scales up by two servers, the servers are created successfully and assigned to the desired load balancers. An example of a negative System Integration test is to verify the behavior of Auto Scale when a server being created goes into an error state.
+
+Automating the positive and negative functional tests was simple and straightforward. However, the positive system integration tests were slow and flaky because of the time it took to provision an instance, or due to network issues. Also, there was no way to automate the negative system integration tests, as it was impossible to simulate the dependent systems' error conditions. Hence, such negative tests began to look like this,
+
+```
+def test_system_create_delete_scaling_group_all_servers_error(self):
+    """
+    Verify create delete scaling group when all the servers go into
+    error state
+    """
+    pass
+```
+
+I continued to write tests and I was happy as our test coverage was improving.
Soon they were integrated as a gate in our CI/CD process. But this began to slow down the merge pipeline, as the test suite took over 3 hours to complete and was often unreliable. It would fail whenever it came across an irreproducible error in a dependent system, such as a server going into an error state or remaining in a build state indefinitely. Also, the teams owning the dependent services were alarmed by the sudden surge in our usage of their resources and had begun to complain (well, it's Auto Scale!).
+
+The tests were not helping and instead had become a burden. Nobody fancied running them locally during development (including me - when developing more tests!).
+
+This needed to change! We needed feedback within minutes, not hours, without compromising on the test coverage. We needed a way to reproduce the upstream failures and reliably verify that Auto Scale can handle such failures. And this needed to be done in a cost-efficient manner, without using up the resources of the upstream systems for our testing purposes.
+
+All of these factors led me to write [Mimic](https://github.com/rackerlabs/mimic), an API-compatible mock service for Identity, OpenStack Compute and Load Balancers. Mimic provides dynamic, stateful responses based on templates of expected behavior for the supported services. It is backed by in-memory data structures rather than a potentially expensive database, is easy to set up, and speeds up feedback. Mimic eliminates the use of production resources for testing, enables offline development, and is cost- and time-efficient. Also, Mimic supports error injection by analyzing the [`metadata`](http://docs.rackspace.com/servers/api/v2/cs-devguide/content/Server_Metadata-d1e2529.html) sent within the JSON request body while [creating a server](http://docs.rackspace.com/servers/api/v2/cs-devguide/content/CreateServers.html) or [load balancer](http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/POST_createLoadBalancer_v1.0__account__loadbalancers_load-balancers.html), and generates various error conditions.
+
+We now have integrated Mimic within our development environment as well as in our CI/CD process and have tests running against it. By doing so, test run time has dropped dramatically. To be precise, it has gone from over 3 hours to less than 3 minutes!! Our test coverage of the negative system integration tests went up, as we are able to replicate various error conditions. Everybody is happy to run tests frequently and get immediate feedback. The teams from the dependent services are happy to know that we are not using up their resources at a large scale for testing purposes.
+
+Also, unlike other mock frameworks, using Mimic does not involve including many extra lines of code that crowd the tests. Tests just need to pass in the `metadata` for a negative scenario, and Mimic will process it and return the expected response. This makes test code easy to read and understand.
+Also, changes in upstream systems' behavior do not alter the tests.
+
+Mimic has only grown since I first wrote it for the purpose of Auto Scale testing. Thanks to Glyph Lefkowitz and Ying Li, it now has a plugin architecture allowing others to implement mocks of other Rackspace and OpenStack API services. It allows for control of time, has 100% test coverage, and so much more. Check it out at https://github.com/rackerlabs/mimic.
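+
+To give a flavor of how little setup this takes, here is a rough sketch of a test bootstrapping itself against a locally running Mimic. (A sketch, not gospel: it assumes Mimic's default port of 8900 and a Keystone-v2-style token response, and the exact paths and catalog contents may differ across Mimic versions.)
+
+```
+import requests
+
+MIMIC = "http://localhost:8900"  # assumption: Mimic running locally on its default port
+
+# Authenticate against Mimic's Identity mock; it accepts any credentials
+# and returns a token plus a service catalog pointing at the other mocks.
+resp = requests.post(
+    MIMIC + "/identity/v2.0/tokens",
+    json={"auth": {"passwordCredentials": {"username": "test", "password": "test"}}},
+)
+resp.raise_for_status()
+access = resp.json()["access"]
+token = access["token"]["id"]
+
+# Pull the mocked Compute endpoint out of the returned service catalog.
+compute = next(
+    endpoint["publicURL"]
+    for service in access["serviceCatalog"]
+    if service["type"] == "compute"
+    for endpoint in service["endpoints"]
+)
+
+# From here on, the test talks to "Compute" exactly as it would in production.
+print(requests.get(compute + "/servers/detail",
+                   headers={"X-Auth-Token": token}).json())
+```
+
+Everything after the token call is plain production-style client code, which is exactly the point.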
+
+Our goal now is to find more use cases for Mimic and make testing API services painless and efficient. We welcome contributions, feedback, thoughts and ideas. Come join us in developing Mimic, or talk to us in ##mimic on irc.freenode.net.
\ No newline at end of file

From 0a06f33a4ae740c2cbe81a3a207bc8f3ce069cb3 Mon Sep 17 00:00:00 2001
From: lekhajee
Date: Fri, 17 Oct 2014 15:22:29 -0700
Subject: [PATCH 4/6] reviews

---
 blog.md | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/blog.md b/blog.md
index 2d8ced5cd6c..d3d73bca67b 100644
--- a/blog.md
+++ b/blog.md
@@ -1,14 +1,22 @@
 ## Mimic - Mocks not driven by tests
 
-I was a QE on the Rackspace Auto Scale team and would like to take you through the experiences and lessons learned from testing Auto Scale, and how the ability to test it improved drastically using Mimic.
+### Why am I writing this?
+
+I was a QE on the Rackspace Auto Scale team. I would like to take you through the experiences and lessons learned from testing Auto Scale, and how the ability to test it improved drastically using Mimic.
+
+### What is Rackspace Auto Scale?
 
 [Rackspace Auto Scale](http://docs.rackspace.com/cas/api/v1.0/autoscale-devguide/content/Overview.html) is a web service that allows users to automate scaling servers up or down based on user-defined conditions, or on a schedule. To do so, Auto Scale depends on other services such as [Identity](http://docs.rackspace.com/auth/api/v2.0/auth-client-devguide/content/Overview-d1e65.html) - for authentication and impersonation, [OpenStack Compute](http://docs.rackspace.com/servers/api/v2/cs-devguide/content/ch_preface.html) - to scale servers up or down, and [Load Balancers](http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/Overview-d1e82.html) - to load balance the servers created.
 
-Successfully testing Auto Scale meant testing not just the features of Auto Scale itself, but that it is consistent irrespective of any failures in the upstream systems. I had taken up the task of writing tests for Auto Scale and had envisioned writing positive and negative Functional and System Integration tests.
+### What's so hard about testing Auto Scale?
+
+Successfully testing Auto Scale meant testing not just the features of Auto Scale itself, but also that it is consistent irrespective of any failures in the upstream systems. I had taken up the task of writing tests for Auto Scale and had envisioned writing positive and negative Functional and System Integration tests.
+
+### More about the Auto Scale test suite
 
-Functional tests to validate the API contracts and behavior of Auto Scale. An example of a positive functional test is to verify that the [create scaling group](http://docs.rackspace.com/cas/api/v1.0/autoscale-devguide/content/POST_createGroup_v1.0__tenantId__groups_autoscale-groups.html) API call returns the expected response and that a scaling group is successfully created. An example of a negative functional test is to verify that create scaling group results in a 400 response code when the minimum servers specification (minEntities) for the group is set greater than the maximum servers allowed (maxEntities).
+Functional tests are used to validate the API contracts and behavior of Auto Scale. An example of a positive functional test is to verify that the [create scaling group](http://docs.rackspace.com/cas/api/v1.0/autoscale-devguide/content/POST_createGroup_v1.0__tenantId__groups_autoscale-groups.html) API call returns the expected response and that a scaling group is successfully created. An example of a negative functional test is to verify that create scaling group results in a 400 response code when the minimum servers specification (minEntities) for the group is set greater than the maximum servers allowed (maxEntities).
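+
+To make the negative case concrete, here is roughly what such a test can look like. (Illustrative only: the endpoint, the payload shape, and fixtures like `base_url` and `token` are modeled on the linked API guide, not lifted from our actual suite.)
+
+```
+import requests
+
+def test_create_group_min_greater_than_max_returns_400(base_url, token):
+    """A group whose minEntities exceeds maxEntities must be rejected."""
+    group = {
+        "groupConfiguration": {
+            "name": "invalid-group",
+            "minEntities": 5,  # deliberately greater than maxEntities below
+            "maxEntities": 2,
+            "cooldown": 60,
+        },
+        "launchConfiguration": {
+            "type": "launch_server",
+            "args": {"server": {"name": "srv", "flavorRef": "2", "imageRef": "img"}},
+        },
+    }
+    resp = requests.post(
+        base_url + "/groups", json=group, headers={"X-Auth-Token": token}
+    )
+    assert resp.status_code == 400
+```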
 
-System Integration tests to validate the integration between Auto Scale and its dependent systems. An example of a positive System Integration tests is to verify that when a user creates a scaling group and scales up by two servers, the servers are created successfully and assigned to the desired load balancers. An example of a negative System Integration test is to verify the behavior of Auto Scale when a server being created goes into an error state.
+System Integration tests are used to validate the integration between Auto Scale and its dependent systems. An example of a positive System Integration tests is to verify that when a user creates a scaling group and scales up by two servers, the servers are created successfully and assigned to the desired load balancers. An example of a negative System Integration test is to verify the behavior of Auto Scale when a server being created goes into an error state.
 
 Automating the positive and negative functional tests was simple and straightforward. However, the positive system integration tests were slow and flaky because of the time it took to provision an instance, or due to network issues. Also, there was no way to automate the negative system integration tests, as it was impossible to simulate the dependent systems' error conditions. Hence, such negative tests began to look like this,
@@ -21,19 +21,26 @@ def test_system_create_delete_scaling_group_all_servers_error(self):
     pass
 ```
 
+### EVERYTHING IS SO SLOW!!!
+
 I continued to write tests and I was happy as our test coverage was improving. Soon they were integrated as a gate in our CI/CD process. But this began to slow down the merge pipeline, as the test suite took over 3 hours to complete and was often unreliable. It would fail whenever it came across an irreproducible error in a dependent system, such as a server going into an error state or remaining in a build state indefinitely. Also, the teams owning the dependent services were alarmed by the sudden surge in our usage of their resources and had begun to complain (well, it's Auto Scale!).
 
 The tests were not helping and instead had become a burden. Nobody fancied running them locally during development (including me - when developing more tests!).
 
 This needed to change! We needed feedback within minutes, not hours, without compromising on the test coverage. We needed a way to reproduce the upstream failures and reliably verify that Auto Scale can handle such failures. And this needed to be done in a cost-efficient manner, without using up the resources of the upstream systems for our testing purposes.
 
+### And Now: A New Dawn, A New Hope
+
 All of these factors led me to write [Mimic](https://github.com/rackerlabs/mimic), an API-compatible mock service for Identity, OpenStack Compute and Load Balancers. Mimic provides dynamic, stateful responses based on templates of expected behavior for the supported services. It is backed by in-memory data structures rather than a potentially expensive database, is easy to set up, and speeds up feedback. Mimic eliminates the use of production resources for testing, enables offline development, and is cost- and time-efficient. Also, Mimic supports error injection by analyzing the [`metadata`](http://docs.rackspace.com/servers/api/v2/cs-devguide/content/Server_Metadata-d1e2529.html) sent within the JSON request body while [creating a server](http://docs.rackspace.com/servers/api/v2/cs-devguide/content/CreateServers.html) or [load balancer](http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/POST_createLoadBalancer_v1.0__account__loadbalancers_load-balancers.html), and generates various error conditions.
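 
 For instance, here is roughly what injecting a failure through `metadata` looks like from a test. (A sketch: the `server_error` key below stands in for the kind of metadata Mimic inspects; check the Mimic docs for the exact keys your version supports.)
 
 ```
 import requests
 
 # Ask Mimic's Compute mock to create a server that goes straight into an
 # error state, simulating the upstream failure we could never reproduce
 # against the real service.
 create_request = {
     "server": {
         "name": "autoscale-test-server",
         "imageRef": "test-image",
         "flavorRef": "test-flavor",
         "metadata": {"server_error": "1"},  # illustrative error-injection key
     }
 }
 resp = requests.post(
     compute + "/servers",  # Compute endpoint from Mimic's service catalog
     json=create_request,
     headers={"X-Auth-Token": token},
 )
 assert resp.status_code == 202  # accepted, just like the real Compute API
 ```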
 
-We now have integrated Mimic within our development environment as well as in our CI/CD process and have tests running against it. By doing so, test run time has dropped dramatically. To be precise, it has gone from over 3 hours to less than 3 minutes!! Our test coverage of the negative system integration tests went up, as we are able to replicate various error conditions. Everybody is happy to run tests frequently and get immediate feedback. The teams from the dependent services are happy to know that we are not using up their resources at a large scale for testing purposes.
+We now have Mimic integrated within our development environment as well as in our CI/CD process and have tests running against it. By doing so, test run time has dropped dramatically. To be precise, it has gone from over 3 hours to less than 3 minutes!! Our test coverage of the negative system integration tests went up, as we are able to replicate various error conditions. Everybody is happy to run tests frequently and get immediate feedback. The teams from the dependent services are happy to know that we are not using up their resources at a large scale for testing purposes.
+
+### Why is this different from other mock services?
+
+Also, unlike other mock frameworks, using Mimic does not involve including many extra lines of code that crowd the tests. Tests just need to pass in the `metadata` for a negative scenario, and Mimic will process it and return the expected response. This makes test code easy to read and understand. Also, changes in upstream systems' behavior do not alter the tests.
 
-Also, unlike other mock frameworks, using Mimic does not involve including many extra lines of code that crowd the tests. Tests just need to pass in the `metadata` for a negative scenario, and Mimic will process it and return the expected response. This makes test code easy to read and understand.
-Also, changes in upstream systems' behavior do not alter the tests.
+### Onwards!
 
-Mimic has only grown since I first wrote it for the purpose of Auto Scale testing. Thanks to Glyph Lefkowitz and Ying Li, it now has a plugin architecture allowing others to implement mocks of other Rackspace and OpenStack API services. It allows for control of time, has 100% test coverage, and so much more. Check it out at https://github.com/rackerlabs/mimic.
+Mimic has only grown since I first wrote it for the purpose of Auto Scale testing. Thanks to Glyph Lefkowitz and Ying Li, it now has a plugin architecture allowing others to implement mocks of other Rackspace and OpenStack API services. It allows for control of time, has 100% test coverage, and so much more.
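+
+As a taste of the time control, a heavily hedged sketch (the tick endpoint path and payload shown are from my reading of the Mimic docs and may differ in your version):
+
+```
+import requests
+
+# Advance Mimic's internal clock by an hour without actually waiting,
+# so tests of schedule-driven behavior finish in milliseconds.
+requests.post("http://localhost:8900/mimic/v1.1/tick", json={"amount": 3600.0})
+```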
+Other teams within Rackspace have started to adopt Mimic. One of our developers, Eddy Hernandez from the [Cloud Intelligence](http://www.rackspace.com/blog/get-more-from-your-data-with-rackspace-cloud-intelligence) team, calls it "Developing and running Cloud Intelligence in airplane mode!".

From 055e00c92b873c74a27c289e84efaec07a04943d Mon Sep 17 00:00:00 2001
From: lekhajee
Date: Fri, 17 Oct 2014 15:24:17 -0700
Subject: [PATCH 5/6] headers - courtesy Alex

---
 blog.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/blog.md b/blog.md
index d3d73bca67b..cb41f7b4823 100644
--- a/blog.md
+++ b/blog.md
@@ -1,18 +1,18 @@
 ## Mimic - Mocks not driven by tests
 
-### Why am I writing this?
+#### Why am I writing this?
 
 I was a QE on the Rackspace Auto Scale team. I would like to take you through the experiences and lessons learned from testing Auto Scale, and how the ability to test it improved drastically using Mimic.
 
-### What is Rackspace Auto Scale?
+#### What is Rackspace Auto Scale?
 
 [Rackspace Auto Scale](http://docs.rackspace.com/cas/api/v1.0/autoscale-devguide/content/Overview.html) is a web service that allows users to automate scaling servers up or down based on user-defined conditions, or on a schedule. To do so, Auto Scale depends on other services such as [Identity](http://docs.rackspace.com/auth/api/v2.0/auth-client-devguide/content/Overview-d1e65.html) - for authentication and impersonation, [OpenStack Compute](http://docs.rackspace.com/servers/api/v2/cs-devguide/content/ch_preface.html) - to scale servers up or down, and [Load Balancers](http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/Overview-d1e82.html) - to load balance the servers created.
 
-### What's so hard about testing Auto Scale?
+#### What's so hard about testing Auto Scale?
 
 Successfully testing Auto Scale meant testing not just the features of Auto Scale itself, but also that it is consistent irrespective of any failures in the upstream systems. I had taken up the task of writing tests for Auto Scale and had envisioned writing positive and negative Functional and System Integration tests.
 
-### More about the Auto Scale test suite
+#### More about the Auto Scale test suite
 
 Functional tests are used to validate the API contracts and behavior of Auto Scale. An example of a positive functional test is to verify that the [create scaling group](http://docs.rackspace.com/cas/api/v1.0/autoscale-devguide/content/POST_createGroup_v1.0__tenantId__groups_autoscale-groups.html) API call returns the expected response and that a scaling group is successfully created. An example of a negative functional test is to verify that create scaling group results in a 400 response code when the minimum servers specification (minEntities) for the group is set greater than the maximum servers allowed (maxEntities).
@@ -29,7 +29,7 @@ def test_system_create_delete_scaling_group_all_servers_error(self):
     pass
 ```
 
-### EVERYTHING IS SO SLOW!!!
+#### EVERYTHING IS SO SLOW!!!
 
 I continued to write tests and I was happy as our test coverage was improving. Soon they were integrated as a gate in our CI/CD process. But this began to slow down the merge pipeline, as the test suite took over 3 hours to complete and was often unreliable.
It would fail whenever it came across an irreproducible error in a dependent system, such as a server going into an error state or remaining in a build state indefinitely. Also, the teams owning the dependent services were alarmed by the sudden surge in our usage of their resources and had begun to complain (well, it's Auto Scale!).
@@ -37,17 +37,17 @@ The tests were not helping and instead had become a burden. Nobody fancied runni
 This needed to change! We needed feedback within minutes, not hours, without compromising on the test coverage. We needed a way to reproduce the upstream failures and reliably verify that Auto Scale can handle such failures. And this needed to be done in a cost-efficient manner, without using up the resources of the upstream systems for our testing purposes.
 
-### And Now: A New Dawn, A New Hope
+#### And Now: A New Dawn, A New Hope
 
 All of these factors led me to write [Mimic](https://github.com/rackerlabs/mimic), an API-compatible mock service for Identity, OpenStack Compute and Load Balancers. Mimic provides dynamic, stateful responses based on templates of expected behavior for the supported services. It is backed by in-memory data structures rather than a potentially expensive database, is easy to set up, and speeds up feedback. Mimic eliminates the use of production resources for testing, enables offline development, and is cost- and time-efficient. Also, Mimic supports error injection by analyzing the [`metadata`](http://docs.rackspace.com/servers/api/v2/cs-devguide/content/Server_Metadata-d1e2529.html) sent within the JSON request body while [creating a server](http://docs.rackspace.com/servers/api/v2/cs-devguide/content/CreateServers.html) or [load balancer](http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/POST_createLoadBalancer_v1.0__account__loadbalancers_load-balancers.html), and generates various error conditions.
 
 We now have Mimic integrated within our development environment as well as in our CI/CD process and have tests running against it. By doing so, test run time has dropped dramatically. To be precise, it has gone from over 3 hours to less than 3 minutes!! Our test coverage of the negative system integration tests went up, as we are able to replicate various error conditions. Everybody is happy to run tests frequently and get immediate feedback. The teams from the dependent services are happy to know that we are not using up their resources at a large scale for testing purposes.
 
-### Why is this different from other mock services?
+#### Why is this different from other mock services?
 
 Also, unlike other mock frameworks, using Mimic does not involve including many extra lines of code that crowd the tests. Tests just need to pass in the `metadata` for a negative scenario, and Mimic will process it and return the expected response. This makes test code easy to read and understand. Also, changes in upstream systems' behavior do not alter the tests.
 
-### Onwards!
+#### Onwards!
 
 Mimic has only grown since I first wrote it for the purpose of Auto Scale testing. Thanks to Glyph Lefkowitz and Ying Li, it now has a plugin architecture allowing others to implement mocks of other Rackspace and OpenStack API services. It allows for control of time, has 100% test coverage, and so much more. Other teams within Rackspace have started to adopt Mimic.
One of our developers, Eddy Hernandez from the [Cloud Intelligence](http://www.rackspace.com/blog/get-more-from-your-data-with-rackspace-cloud-intelligence) team, calls it "Developing and running Cloud Intelligence in airplane mode!".

From 0bca821872aae0124fde2b6f414df67a926a86d0 Mon Sep 17 00:00:00 2001
From: lekhajee
Date: Mon, 20 Oct 2014 12:34:52 -0700
Subject: [PATCH 6/6] more review

---
 blog.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/blog.md b/blog.md
index cb41f7b4823..84ed6de4604 100644
--- a/blog.md
+++ b/blog.md
@@ -14,9 +14,9 @@
 #### More about the Auto Scale test suite
 
-Functional tests are used to validate the API contracts and behavior of Auto Scale. An example of a positive functional test is to verify that the [create scaling group](http://docs.rackspace.com/cas/api/v1.0/autoscale-devguide/content/POST_createGroup_v1.0__tenantId__groups_autoscale-groups.html) API call returns the expected response and that a scaling group is successfully created. An example of a negative functional test is to verify that create scaling group results in a 400 response code when the minimum servers specification (minEntities) for the group is set greater than the maximum servers allowed (maxEntities).
+Functional tests are used to validate the API contracts and behavior of Auto Scale. An example of a positive functional test is to verify that the [create scaling group](http://docs.rackspace.com/cas/api/v1.0/autoscale-devguide/content/POST_createGroup_v1.0__tenantId__groups_autoscale-groups.html) API call returns the expected response and that a scaling group is successfully created. An example of a negative functional test is to verify that create scaling group results in a 400 response code when the request is malformed.
 
-System Integration tests are used to validate the integration between Auto Scale and its dependent systems. An example of a positive System Integration tests is to verify that when a user creates a scaling group and scales up by two servers, the servers are created successfully and assigned to the desired load balancers. An example of a negative System Integration test is to verify the behavior of Auto Scale when a server being created goes into an error state.
+System Integration tests are used to validate the integration between Auto Scale and its dependent systems. An example of a positive System Integration test is to verify that when a user creates a scaling group and scales up by two servers, the servers are created successfully and assigned to the desired load balancers. An example of a negative System Integration test is to verify the behavior of Auto Scale when a server being created goes into an error state.
 
 Automating the positive and negative functional tests was simple and straightforward. However, the positive system integration tests were slow and flaky because of the time it took to provision an instance, or due to network issues. Also, there was no way to automate the negative system integration tests, as it was impossible to simulate the dependent systems' error conditions. Hence, such negative tests began to look like this,
@@ -45,7 +45,7 @@ We now have Mimic integrated within our development environment as well as in ou
 #### Why is this different from other mock services?
 
-Also, unlike other mock frameworks, using Mimic does not involve including many extra lines of code that crowd the tests. Tests just need to pass in the `metadata` for a negative scenario, and Mimic will process it and return the expected response. This makes test code easy to read and understand. Also, changes in upstream systems' behavior do not alter the tests.
+Mimic isn't a generic mocking system where the tests have to provide their own scripted responses; Mimic knows how the services it mimics are supposed to behave. Tests just need to pass in the `metadata` for a negative scenario, and Mimic will process it and return the expected response. This makes Mimic a repository of known responses, including error conditions. Hence, using Mimic does not involve including many extra lines of code that crowd the tests. And changes in upstream systems' behavior do not alter the tests. This makes test code robust and easy to read.
 
 #### Onwards!
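 
 To close the loop: with error-injecting `metadata` planted in a scaling group's launch configuration, the negative test that once had to be left as a stub can finally be written. A hedged sketch (the metadata key and the helper methods are illustrative, not the actual Auto Scale test framework's API):
 
 ```
 def test_system_create_delete_scaling_group_all_servers_error(self):
     """
     Verify create delete scaling group when all the servers go into
     error state.
     """
     # Every server Auto Scale builds for this group inherits the metadata,
     # so Mimic's Compute mock moves each one straight to an error state.
     group = self.create_scaling_group(
         min_entities=2,
         server_metadata={"server_error": "1"},  # illustrative Mimic key
     )
     self.wait_for_expected_group_state(group.id, active_servers=0)
     self.delete_scaling_group(group.id)
 ```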