# Queues, Queue workers and Tasks (Asynchronous architecture)
Yesterday we looked at how we can use HTTP-based APIs for communication between services. This works well until you need
to scale, release a new version, or one of your services goes down. Then we start to see the calling service fail because
its dependency is not working as expected. We have tightly coupled our two services; one can't work without the other.
There are many ways to solve this problem. A light-touch approach for existing applications is to use something called a
circuit breaker, which tracks failures to a dependency and fails fast once a threshold is reached, only retrying once the
dependency has had a chance to recover. This is explained well in this blog
by [Martin Fowler](https://martinfowler.com/bliki/CircuitBreaker.html). However, this is still synchronous: if we were to wrap
our calls in a circuit breaker we would still block processes while waiting, and our users could see a slowdown in response times.
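To make the pattern concrete, here is a minimal, illustrative circuit breaker sketch in Python. This is not the implementation used by our services; the threshold, cooldown, and the wrapped call are placeholder assumptions for illustration.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: fail fast once the downstream keeps erroring."""

    def __init__(self, failure_threshold=3, reset_timeout=30):
        self.failure_threshold = failure_threshold  # consecutive failures before we "open"
        self.reset_timeout = reset_timeout          # seconds to wait before trying again
        self.failure_count = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        # While the circuit is open, refuse calls until the cooldown has passed.
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open - downstream unavailable")
            self.opened_at = None  # half-open: allow one trial request

        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failure_count += 1
            if self.failure_count >= self.failure_threshold:
                self.opened_at = time.time()  # open the circuit
            raise
        else:
            self.failure_count = 0  # a success resets the breaker
            return result

# Hypothetical usage: wrap the HTTP call to the downstream service, e.g.
# breaker = CircuitBreaker()
# breaker.call(requests.get, "http://requestor/receive", timeout=2)
```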
Additionally, we can't scale our applications using this approach. The way the code is currently written, every
instance of our `generator` API would be asking the `requestor` for confirmation that it received the string. This won't
scale well when we move to having 2, 5, 10, or 100 instances running; we would quickly see the
`requestor` being overwhelmed with requests from the 100 `generator` applications.
There is a way to solve these problems: use queues. This is a shift in thinking towards an asynchronous
approach, and it works well when responses between applications don't need to be immediate.
In that case it doesn't matter if requests between the applications are delayed a little; as long as the data
eventually flows between them, we are happy.
![Queues, producers and Consumers](./images/day84-queues.png)
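As a rough illustration of this model, below is a minimal producer/consumer sketch assuming a RabbitMQ broker and the `pika` Python client. The broker choice, the queue name `strings`, and the host are assumptions for illustration, not necessarily the stack used by our services.

```python
# producer.py - the "generator" side publishes a message and moves on
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="strings", durable=True)  # queue survives broker restarts

channel.basic_publish(
    exchange="",
    routing_key="strings",
    body="a randomly generated string",
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```

```python
# consumer.py - the "requestor" side processes messages at its own pace
import pika

def handle_message(ch, method, properties, body):
    print(f"received: {body.decode()}")
    ch.basic_ack(delivery_tag=method.delivery_tag)  # acknowledge only after processing

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="strings", durable=True)
channel.basic_qos(prefetch_count=1)  # hand each consumer one message at a time
channel.basic_consume(queue="strings", on_message_callback=handle_message)
channel.start_consuming()
```

The producer returns as soon as the message is queued, so the `generator` never waits on the `requestor`, and either side can be scaled, released, or restarted independently; extra consumer instances simply drain the queue faster.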