Category: Coding skills

Recursion / Recursive Functions

[Image by Maurits Cornelis Escher]

A recursive function is a function which calls itself.

It sounds like a simple concept, but in practice it can be hard to get your head around.

Here’s a lovely example of a recursive function:

void ExplainRecursion() {
    // GetInput asks the user a question and returns their answer as a bool
    var explained = GetInput("Do you understand recursion yet?");
    if (!explained) {
        ExplainRecursion();
    }
}

It’s particularly nice because it uses recursion to explain recursion. It calls itself – it’s a recursive function. Every time it calls itself, it asks the question “Do you understand recursion yet?” and if the answer is No it calls itself again. It keeps calling itself until recursion is finally understood, at which point it quits. What might not be immediately obvious is that when it quits, it will still be inside the previous call, and will keep returning until it gets back to the first time it was called. Like this:

ExplainRecursion()

—–>Do you understand recursion yet?

—–>No => ExplainRecursion()

———->Do you understand recursion yet?

———->No => ExplainRecursion()

—————>Do you understand recursion yet?

—————>No => ExplainRecursion()

——————–>Do you understand recursion yet?

——————–>Yes! => Exit function

—————>Exit function

———->Exit function

—–>Exit function

Exit function

It’s true that this is a slightly jokey example. It works if you see the source code, but if you only ever saw the output you wouldn’t learn much about recursion. You’d just be asked the same question repeatedly and probably get a bit angry.

Here’s another simple example:

int Factorial(int n) {
    if (n == 1) {
        return 1;
    } else {
        return n * Factorial(n - 1);
    }
}

This function is calculating “n factorial”, which is written as “n!” in mathematical notation. In case you’ve forgotten or haven’t come across this concept before, it’s easiest to explain with a concrete example: 5! (“five factorial”) is a quick way of writing 5 x 4 x 3 x 2 x 1. So n! means take the number n and multiply it by all the whole positive numbers that lie between itself and zero.

The above function calls itself, which means that unless n is 1 (because 1 factorial just equals 1), it will return a result… to itself.

Don’t worry if you’re feeling lost already. It took me a long time to be able to look at recursive functions without getting an instant headache. I found it helpful to draw little diagrams. For instance, let’s use the above function to calculate 5 factorial.

Remember, the answer we expect is 5 x 4 x 3 x 2 x 1.

The first time we enter the function, we call Factorial(5), so n = 5. The function returns 5 x Factorial(4), so it gets called again and this time n = 4. That returns 4 x Factorial(3), which returns 3 x Factorial(2), which returns 2 x Factorial(1). Factorial(1) just returns 1. Here’s a diagram to help:

Factorial(5) = 5 x Factorial(4)

—–>Factorial(4) = 4 x Factorial(3)

———->Factorial(3) = 3 x Factorial(2)

—————>Factorial(2) = 2 x Factorial(1)

——————–>Factorial(1) = 1

—————>= 2 x 1

———->= 3 x (2 x 1)

—–>= 4 x (3 x (2 x 1))

= 5 x (4 x (3 x (2 x 1)))

Normally when a function returns a value, it’s over. We can move on. But with a recursive function, we might just land back inside ourselves again, and then we might return a value to yet another version of ourselves. With complex functions, this can be hard to trace through in your head without losing track. It’s also horribly easy to get stuck in infinite recursion, where the function never stops calling itself and eventually blows the call stack.
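For instance, the Factorial function above will recurse forever if you call it with 0 or a negative number, because n never reaches 1. A guard clause protects against that – here’s a small defensive variation of my own (not part of the original example):

int Factorial(int n) {
    if (n < 0) {
        // factorial isn't defined for negative numbers, so fail fast
        // instead of recursing forever
        throw new ArgumentOutOfRangeException(nameof(n));
    }
    if (n <= 1) {
        // base case: 0! and 1! are both 1
        return 1;
    }
    return n * Factorial(n - 1);
}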

Hopefully this helped. If not, don’t worry. It’s not just you! It’s a notoriously head-twisty concept.

 

Liskov Substitution Principle

Some notes I made on the Liskov Substitution Principle (LSP) – which is the L in SOLID.

I can never remember the details of Liskov, which is why I made these notes.

These notes are also visible on my new personal wiki site, where I briefly describe the S, O, I and D in SOLID as well.

  • “objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.” See also design by contract.
  • Code that uses a base class must be able to substitute a subclass without knowing it
  • The specific functionality of the subclass may be different but must conform to the expected behaviour of the base class.
  • EXAMPLE:
    • So, I can call FemaleMammal.GiveBirth and I can substitute a cow into that – Cow.GiveBirth without breaking anything
    • No matter which version I choose, I should be confident that I will end up with a baby animal. Even though the specifics might be slightly different.
  • These are the principles which must be adhered to:
    • Contravariance with parameters
      • There should be contravariance between parameters of the base class’s methods and the matching parameters in subclasses. This means that the parameters in subclasses must either be the same types as those in the base class or must be less restrictive.
      • Parameters in subclasses are either the same type as in the base class, or a more general type (they accept more)
      • EXAMPLE:
        • FemaleMammal.GiveBirth(Mammal mothersBestFriend)
        • Cow.GiveBirth(Animal mothersBestFriend)
      • See Covariance and Contravariance
    • Covariance with return values
      • There must be covariance between method return values in the base class and its subclasses. This specifies that the subclass’ return types must be the same as, or more restrictive than, the base class’ return types.
      • Return values in subclasses are either the same type as in the base class, or a more specific type (they promise more)
      • EXAMPLE:
        • Mammal baby = FemaleMammal.GiveBirth();
        • Bovine baby = Cow.GiveBirth();
      • See Covariance and Contravariance
    • Preconditions
      • The preconditions of a base class must not be strengthened by a subclass
      • A precondition of a class is a rule that must be in place before an action can be taken.
      • EXAMPLE:
        • FemaleMammal must not be a virgin and must never have had a Caesarian before giving birth
        • Cow must not be a virgin before giving birth
    • Postconditions
      • Postconditions cannot be weakened in subclasses.
      • Postconditions describe the state of objects after a process is completed.
      • EXAMPLE:
        • After female mammal gives birth, it must have an associated baby that is a mammal
        • After cow gives birth, it must have an associated baby that is a bovine (this is fine)
        • After cow gives birth, it must have an associated baby that is any kind of animal (this would not be fine)
    • Invariants
      • The invariants of a base class must not be changed by a subclass.
      • An invariant describes a condition of a process that is true before the process begins and remains true afterwards.
      • EXAMPLE:
        • FemaleMammal is female both before and after birth
        • Cow is female both before and after birth
    • History
      • New or modified members should not modify the state of an object in a manner that would not be permitted by the base class
      • EXAMPLE:
        • FemaleMammal gains a newly populated Baby property and the baby has a head
        • Cow gains a newly populated Baby property (this is fine)
        • Cow gains a newly populated CowUdders property (this is not fine)
    • Exceptions
      • A subclass should not throw exceptions that are not thrown by the base class unless they are subtypes of exceptions that may be thrown by the base class
      • EXAMPLE:
        • FemaleMammal.GiveBirth throws a BreachedBaby exception
        • Cow.GiveBirth throws a BreachedBabyCow exception (this is fine)
        • Cow.GiveBirth throws a NotEnoughGrass exception (this is not fine)
  • For instance, if you want a read-only file type, it should not inherit from a File class that has a Save method. Rather, the base File type has a Load method, and a subclass – WriteableFile – adds the Save method, so the base class never promises something it cannot deliver. (Otherwise you would have a ReadOnlyFile which could not implement the Save method it inherits from its parent, would leave the object in a different state after calling Save than you would expect, and would probably throw an exception from Save that the base class never throws.) There’s a small sketch of this below.
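To make that concrete, here’s a minimal C# sketch of the File / WriteableFile shape (class and member names are my own, purely illustrative):

// The base type only promises what every file can do: it can be loaded.
public class File
{
    public virtual void Load(string path)
    {
        // read the contents from disk (details omitted)
    }
}

// Saving is added in a subclass, so code written against File never expects it.
public class WriteableFile : File
{
    public virtual void Save(string path)
    {
        // write the contents to disk (details omitted)
    }
}

// Any code that accepts a File can safely be handed a WriteableFile, because the
// subclass only adds behaviour. By contrast, a ReadOnlyFile that inherited a Save
// method and threw from it would break the substitution principle: callers of the
// base class could no longer rely on its contract.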

Paired Programming: Useful Articles, Resources and Research

During my talk at NDC London this week, I promised to publish a list of resources you can use if you are trying to persuade people of the efficacy of paired programming as a software development technique. Here it is!

(incidentally, the image is of Sal Freudenberg and me doing some remote pairing – we use Zoom and we find it works really well – I often forget we are not in the same room).

Simultaneous equations and algorithms: Finding the “best fit” of cells in a grid

I got pinged by my colleague @j_f_green this morning with the following message: “Hey Clare. Hope all is well. Was wondering if I could be cheeky and get your input on a (small) thing I’m trying to solve? I am searching for an elegant solution and I’m starting to suspect the answer may involve maths.”

…and of course I was instantly interested.

The problem

It turned out to be a relatively simple problem, depending on your point of view. If I say “GCSE-level simultaneous equations” your reaction will tell you whether that’s simple for you or not (for people not in the UK, GCSEs are the exams our pupils take when they are 16 years old).

Here is the problem: We want to discover two whole numbers (aka positive integers) that will give us the height and width of a grid in cells, given the following constraints:

  1. We have a number, let’s call it capacity, which represents the minimum number of cells to fill the grid. We want at least that many cells. We’re happy if we have more, but not if we have fewer.
  2. We also have another number, we’ll call it aspectRatio, which is the aspect ratio of the grid. This is the relationship between height and width. So for instance, if the aspect ratio is 2:1, that means the width is twice the height (it’s twice as wide as it is high). To reduce that to a single number,  aspectRatio is 2.

width = 2 * height = aspectRatio * height

(aspectRatio will often be a decimal or a fraction, so for instance if width is half height, aspectRatio is 0.5, or if the ratio is 2:3 then aspectRatio = 2/3 = 0.66, and width = 0.66 * height)

A concrete example

Here is a concrete example: Let’s say we have a grid that is twice as wide as it is high, so aspectRatio = 2. Let’s also say that our capacity is 200: We want to fit at least 200 cells into this space. I’ve deliberately chosen an easy example, and the answer in this case would be a width of 20 and a height of 10:

◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️
◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️
◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️
◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️
◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️
◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️
◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️
◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️
◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️
◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️◼️

Apologies if that’s making your eyes hurt. Is it making your eyes hurt? It’s making my eyes hurt. I’ll try and make a better diagram later. It’s not even got the correct aspect ratio on the page…

If you enjoy attacking maths problems, look away now. Have a go at solving it before you read on. You may come up with a better solution anyway.

So how do we compute the answer?

One way would be a loop. Here is some code that would do the job:

var capacity = 1000;
var aspectRatio = 1000.0 / 800;   // = 1.25

int width = 0;
int height = 0;

while (width * height < capacity)
{
    height += 1;
    // RoundToNearestInteger is a stand-in for e.g. (int)Math.Round(...)
    width = RoundToNearestInteger(height * aspectRatio);
}

Assert.AreEqual(29, height);
Assert.AreEqual(36, width);

Here we have a grid that has at least 1000 cells:
36 * 29 = 1044

It has roughly the correct aspect ratio:
36 / 29 = 1.24
1000 / 800 = 1.25

The problem with this is that it does not scale well. The higher the numbers, the longer the loop. But fear not – maths to the rescue.

A simple use case

Let’s start with a simple use case: imagine the aspect ratio is 2:1, so the width is twice the height. Also let’s say we’re trying to fit 200 cells into the space.
Let’s call width and height x and y. So:
x * y = 200
and
x = 2 * y = 2y
If x = 2y then x * y is the same as 2y * y, which is 2y²:
2y² = 200
y² = 100
y = 10
x = 20 (because it’s 2y)

A generalised solution

The first block above gives us our two simultaneous equations.

We can generalise the same approach for all numbers. Let’s call the capacity a, and the ratio b. Using our equations from above :
x * y = a
and
x = b * y
These are our simultaneous equations. Using the same logic as above, we substitute b * y for x and end up with:
by² = a
Which means that
y² = a/b
and
y = sqrt(a/b)
(where “sqrt” is square root).
Then once you’ve calculated y, you can calculate x:
x = b * y
The only missing piece of the puzzle is that we are dealing with inequality, not equality. So really it’s this:
y >= sqrt(a/b)
This just means that once you get a result, you round it up to the nearest integer. The same applies to the final substitution: if b * y is not a whole number, then round it up.
When you calculate b * y, you should use y BEFORE rounding.
So to finish, let’s stop being mathsy and start being softwarey. Let’s give our variables some meaningful names:
height >= sqrt(capacity / aspectRatio)
width = aspectRatio * height
If we test it with the example we used in the loop above, where aspectRatio = 1.25 and capacity = 1000, we get the same results:
height >= 28.28 [= sqrt(1000 / 1.25)]
width = 36 [= RoundUp(1.25 * height)]
So height = 29, width = 36, aspectRatio = 36 / 29 = 1.24, num cells = 29 * 36 = 1044
Now we can finish with some code:
var heightUnrounded = SquareRoot(capacity / aspectRatio);
var height = RoundUp(heightUnrounded);
var width = RoundUp(heightUnrounded * aspectRatio);
Compared to the loop we started with, this scales brilliantly. It doesn’t matter how big the numbers are, the compute time is basically the same. It has a constant computational complexity, otherwise known as O(1)*. The solution above using a loop gets slower and slower depending on how big the numbers are. It has a computational complexity of O(n), so it’s not as good.
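If it helps to see it end to end, here’s a rough, self-contained C# version of the same calculation (the method name, and the choice of Math.Sqrt and Math.Ceiling in place of the SquareRoot / RoundUp pseudocode, are mine):

// Returns the smallest grid (width x height) with at least the requested number
// of cells and roughly the requested aspect ratio (width = aspectRatio * height).
(int width, int height) CalculateGridSize(int capacity, double aspectRatio)
{
    // height >= sqrt(capacity / aspectRatio), rounded up to whole cells
    var heightUnrounded = Math.Sqrt(capacity / aspectRatio);
    var height = (int)Math.Ceiling(heightUnrounded);

    // width = aspectRatio * height, using the height BEFORE rounding, then rounded up
    var width = (int)Math.Ceiling(heightUnrounded * aspectRatio);

    return (width, height);
}

For example, CalculateGridSize(1000, 1.25) gives a width of 36 and a height of 29, matching the loop version above (29 x 36 = 1044 cells).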
Hopefully that makes sense and I didn’t over-simplify. All feedback welcome.
* I couldn’t remember how to express O(1) so I initially guessed at O(k), because I was thinking of k as a constant. Sadly I was wrong, because I rather like the thumbs-up “Ok” concept.
Martin Fowler webinar, new Refactoring book

Martin Fowler webinar, new Refactoring book

These may well be the notiest notes I’ve ever published, but just in case they’re of any use to anyone… if nothing else they may whet your appetite for the new edition of Martin Fowler’s Refactoring book.

I confess I never read it first time round (not on purpose, just there are so many books in the world and so little time…), so I’m looking forward to reading it this time. It hasn’t come out yet in the UK but should be some time in the next few weeks. Mine is already on order [drums fingers impatiently].

So anyway, Martin did a webinar today on the topic of his new book, and here are my notes:

Refactoring should be done via small changes

  • v small semantics-preserving changes
  • So small they’re not worth doing on their own
  • String together small changes to make a big change

When adding new functionality:

  • Alternate between refactoring and adding functionality
  • Often it’s easier to add new functionality if you refactor first
  • Balance between adding new functionality and refactoring – it’s a matter of judgement

V1 of Martin’s Refactoring book vs v2

  • Some things that were included in the first book he decided not to include in the second
    • Eg Unidirectional vs bidirectional – not important
  • Some things he thought were too trivial for the first book, he has now changed his mind about and included
    • Eg moving statements within a function
  • Most notable changes = no longer centrally object-oriented
    • “Extract function” used throughout instead of extract method
    • One of the refactorings is around the choice between OO and non-OO
    • The most visible thing here is the choice of language (JavaScript)
    • But language is not necessarily relevant anyway
  • OO vs functional
    • He doesn’t see that as a huge shift
    • OO should still have functional elements – referentially transparent
    • They are overlapping paradigms, not distinct

Favourite refactorings?

  • “Split phase”
    • Hunk of computation which can sensibly be divided into two phases with a data structure communicating between them
    • Eg parsing – separate the tokenising out – deal with a series of tokens instead of a stream of text (there’s a small sketch of this just after the list)
    • But if you split a loop, what if you are introducing performance problems? See below…
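To make “split phase” a bit more concrete, here’s a tiny sketch of my own (not from the webinar): one chunk of code that parses and prices an order string gets divided into a tokenising phase and a calculating phase, with a small data structure passed between them.

// Phase 1: turn raw text into a small data structure
class OrderData
{
    public string ProductCode;
    public int Quantity;
}

OrderData TokeniseOrder(string orderText)
{
    var parts = orderText.Split(' ');
    return new OrderData { ProductCode = parts[0], Quantity = int.Parse(parts[1]) };
}

// Phase 2: work only with the structure, never with the raw text
decimal PriceOrder(OrderData order, decimal unitPrice)
{
    return order.Quantity * unitPrice;
}

// e.g. PriceOrder(TokeniseOrder("WIDGET 3"), 2.50m) returns 7.50m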

Performance and refactoring

  • Split loop is something people often worry about because it will run the loop through twice (there’s a small sketch of a split loop after this list)
  • Most of the time it doesn’t matter – a push for clarity will not change the performance of the code
  • Most of the time by refactoring you open up an opportunity for performance improvements you would never have noticed otherwise
  • Then again you should run performance tests frequently
  • Not necessarily with every build, but every day or two
  • But most micro-changes have no impact on performance
  • You will only find real performance impact by performance testing
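Here’s roughly what “split loop” means – sketched by me rather than taken from the webinar, and assuming a hypothetical Employee class with Salary and Age properties. One loop doing two jobs becomes two loops doing one job each, which reads more clearly and leaves each loop easy to extract into its own well-named function:

// Before: one loop doing two jobs at once
(decimal totalSalary, int youngestAge) SummariseBefore(List<Employee> employees)
{
    decimal totalSalary = 0;
    int youngestAge = int.MaxValue;
    foreach (var employee in employees)
    {
        totalSalary += employee.Salary;
        youngestAge = Math.Min(youngestAge, employee.Age);
    }
    return (totalSalary, youngestAge);
}

// After "split loop": two loops, each with a single obvious purpose
(decimal totalSalary, int youngestAge) SummariseAfter(List<Employee> employees)
{
    decimal totalSalary = 0;
    foreach (var employee in employees)
    {
        totalSalary += employee.Salary;
    }

    int youngestAge = int.MaxValue;
    foreach (var employee in employees)
    {
        youngestAge = Math.Min(youngestAge, employee.Age);
    }

    return (totalSalary, youngestAge);
}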

Recommendations for other books?

Refactoring architecture:

  • Always have to ask, how do you make the smallest possible change?
  • Eg extracting a data-intensive microservice from a larger system
  • Most of the moves are happening within the monolith BEFORE you start firing up the external system
  • Then you move stuff in really small pieces
  • Things that change for similar reasons should be together – you need to see the relationship between them

Branching / trunk-based development

  • Develop on a branch and merge at the end, with branch being long-lived – days, weeks or months
    • Still a really common pattern
  • But if something has been refactored, you don’t find out until you merge – this can blow up on you
    • It means you become scared to refactor! There’s a disincentive.
  • Continuous integration: Integrate with each other fully at regular intervals – at least once a day
  • Every day you receive everyone else’s code changes and push your own changes back
  • Removes the big painful merge problem – not saving up for a big horrible crunch
  • Adage of Apollo – it hurts if you do it more often (?)
    • Sorry, I wrote this down but I’ve since Googled it and I don’t really know what it’s about – I may have misheard!
  • Open source development:
    • Actually this CAN be a good place to use branches, because everyone is working separately and you can’t guarantee all changes should be merged
  • People call it trunk-based development but “continuous integration” is a better name because that’s what it’s really about
  • People worry about continuous integration but it’s less painful than they think
    • If you do integration every day it becomes easier and less painful until you stop even noticing

Responsibility

  • Keep code base in healthy state
  • If not you are stealing from colleagues, clients and employers!
  • But no point saying you should do it for moral reasons, that won’t motivate people to do it
  • It’s economic: Want to get more changes out more quickly
  • It’s professional: We’re being paid to produce features rapidly
  • It’s not up to managers or business: They don’t know how to keep code healthy – that’s what they pay us to do

Refactoring and design patterns – are they incompatible?

  • With a lot of patterns, you shouldn’t introduce them right away cos they’re not appropriate and you don’t require the full strength of them yet, particularly the more complicated ones
  • Introduce patterns as a result of refactoring: Refactor towards them
  • This was recognised by the original Design Patterns authors (Gamma, Helm, Johnson, Vlissides)
  • Also Josh Kerievsky: Refactoring to Patterns

Code smells

  • Code smell of the week!
    • “This week we will look at long methods” – whenever we find them we try to refactor them

When not to refactor!

  • Code that is never touched
    • Sitting behind an API, working quite happily, doesn’t need to change – doesn’t matter if it’s a mess!
  • Don’t clean code all in one go – just a little bit more every time you go in
    • Over time that will concentrate your efforts where they are most needed – ie areas modified most frequently

Refactor as a step in the red-green-refactor cycle

  • You should only refactor when tests are green
  • Sometimes that means you have to remove a failing test, refactor, then put the failing test back in again
  • Ralph Johnson, John Brant, Don Roberts wrote the first Smalltalk refactoring book (A refactoring tool for smalltalk?)

If you don’t have tests you can’t refactor?

  • “Probably safe” refactorings? Interesting route to go? Not a great thing for most people to focus on, requires a lot of expertise
  • For most people you should have tests as well – after all, how do you know whether you’re breaking what’s currently there?

Final takeaway:

  • Small steps, commit frequently.
  • Then you can always roll back.

Hack Manchester 2018 – The Challenges

I couldn’t find anywhere all the Hack Manchester 2018 challenges / prizes were listed together, so I’m putting them all on one page for easy comparison.

I’m not sure if they’re all there yet – there might be more to come, so keep an eye on the Hack Manchester site (scroll down to “THE CHALLENGES” – the yellow section just before the schedule).

Follow the links below to find out about the sponsors and see more detail / explanation about the challenges:

WebApp UK: Best in Show. (Our team came second for Best in Show last year. Just sayin’.)

MediaBurst: “The most bizarre use of ClockworkSMS!”

Dunnhumby: “use data to make the consumer experience… less than optimal.”

Centre For Biological Timing: “make technology work for our body clocks rather than against them … hack your body clock!”

Avecto: “create a digital solution to help protect or inform friends and family of the issues around Cyber Security.”

AND Digital: “Can clever tech help turn passenger misery into passenger delight?”

GCHQ: “think locally: utilise the fantastic open data sources available about Greater Manchester to improve the safety of the general public.”


Another Interesting Conference Numbers Problem

The other day I posted this problem.

In the process of chatting about it, an associated mathematics problem came up, about permutations and probabilities.

To remind you of the basic problem: We have 450 conference attendees. We have one day of workshops. There are five sessions throughout the day, and five workshops to attend. We want every attendee to attend every workshop. The workshops are being repeated throughout the day: In every session, there are 10 rooms available. There are five workshops, and each workshop is duplicated. So for instance, workshop 1 will be run in rooms A and B, workshop 2 in rooms C and D, etc. They are repeated throughout the day, so one attendee might do them in the order 15423, and another might do 42315.

There will be 45 people in each workshop. The nub of the problem is that the organisers want as much cross-pollination as possible. People do not get to choose what order they do the workshops: they are told which room to be in at what time. The organisers would like each attendee to meet as many new people as possible in every single session.

So here’s the maths question: What is the minimum average number of repeat people that attendees have to meet in each session? By “repeat people” I mean “people they have already attended a previous session with.”

There is one extra constraint: There is one group of 30 people (labelled “FRIENDS” in the spreadsheet below) that have to be kept together in every session. So they only get 15 people in every session that they may not have met before, ie for them the min value of Repeat People is 30 in every session. This also has an impact on all the other people in the other sessions.

However this is still an interesting problem even without the extra 30-people constraint.

So in session one, Repeat People = 0. In every one of the workshops in session 1, attendees are in a brand new group of people that they have never attended this conference with before.

So what is the minimum average value of Repeat People for session 2? I’m specifying average, because it would be possible to keep things pure for at least one attendee, where they meet 44 new people in every single session. But that would have an adverse effect on everyone else. So I’m aiming for everyone to have roughly the same number of Repeat People per workshop.

I have two theories, one of which I can prove:

  1. Apart from in Session 1, the average Repeat People is definitely greater than zero. You can’t avoid meeting attendees more than once (if you’re spreading the load evenly) (I can prove this one).
  2. The average number of Repeat People will get gradually higher throughout the day. The first solution I came up with maintains a constant number for Repeat People after session 1, but it’s non-optimal (I think).

Here is a way of visualising the problem:

[Image: Gina-workshop-empty-2 – an empty grid of the sessions and workshops]

I’ll be very interested to hear any solutions!

A Solution to the Interesting Conference Numbers Problem

The problem is described here. You could approach the problem as a coding kata, which I’d love to do at some point – I didn’t have the time on this occasion. I ended up just solving it using pen and paper.

My solution is below. It won’t be the only solution, there are probably better ones out there that rely more on randomisation and less on patterns (or just use better patterns).

Don’t scroll down if you want to have a go at it yourself first!

 

 

 

So, I found a solution which works pretty well. I’m pretty pleased with it, and it’s nice and simple and neat.

The participants are split into groups of 15. The FRIENDS are in two groups of 15, both marked “FRIENDS” in the spreadsheet below. All the other groups are non-FRIENDS (well they might be friends, who knows? But they still don’t get to stay together – we’re mean like that).

It does mean that the people move in groups of 15, and they will stay in those groups. Not that they will necessarily know it. Because of the numbers, no matter what you do, they will keep encountering some people repeatedly (the maths of this is a separate problem which I’ve blogged about here). If you shuffle the list and assign the non-FRIENDS to 28 groups of 15, hopefully they will already be mixed up with people they don’t know, so they needn’t be aware that they have been grouped. I’ve labelled the groups B1 to O1 and B2 to O2.

What it does mean is that each group of 15 never encounters another group of 15 twice – they meet a different two groups in every session. So everybody meets 30 new people in every session (I haven’t actually proved this but I’m pretty confident).

(Since I came up with this, I’ve realised I can improve it quite a lot so that people don’t have to stay with exactly the same 15 people – scroll down to the bottom to see this improvement).

So, there are three groups of 15 in every workshop in every session, like this (this is my actual solution):

[Image: Gina-Workshop-2 – the completed workshop schedule spreadsheet]

Here’s how I did it:

First, split the non-FRIENDS people into two halves, 225 in each half. The first half has groups B1 to O1, and the second half has B2 to O2.

For now, we will also split the FRIENDS into two groups, A1 and A2 (this was my “Aha” moment, all made possible because of the fact that you have duplicate workshops in each session).

So for each half, we have letters A to O. For the first session, we just spread them across the 5 workshops: ABC in workshop 1, DEF in workshop 2… etc.

For the second session, the first group in each triplet just shuffles along to the next workshop. So A is now in workshop 2, D is in workshop 3, etc.

We keep doing this all the way down the sessions, for the first group in each triplet.

For the second group in each triplet (B, E, H, K and N), instead of shuffling them on by 1, we shuffle them 2 workshops along. So, B was in workshop 1 in session 1, then we add 2 so they are workshop 3 for session 2, then add 2 again so they are in workshop 5 for session 3, keep going (wrapping around) and they are in workshop 2 for session 4 and workshop 4 for session 5. Do this for all the second groups (B, E, H, K and N).

For the third group in each triplet (C, F, I, L, O), add 3 on each time. So group C has the following workshops: 1, then 4, then 2, then 5, then 3.

Do this for all of those third groups.

This is what you get:

[Image: Gina-workshop-diag – the rotation pattern for groups A to O across the five sessions]
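If you’d rather generate that rotation than draw it out by hand, here’s a rough C# sketch of the scheme just described (the method name and group labels are mine, and I’ve only checked it against the pattern above):

// For one half of the attendees (15 groups of 15 people, labelled A to O),
// work out which workshop each group attends in each of the 5 sessions.
// Groups sit in triplets: the 1st group in a triplet moves on by 1 workshop
// each session, the 2nd by 2 and the 3rd by 3, wrapping around.
void PrintSchedule()
{
    string groupLabels = "ABCDEFGHIJKLMNO";

    for (int group = 0; group < 15; group++)
    {
        int triplet = group / 3;    // which triplet the group starts in (0 to 4)
        int position = group % 3;   // 1st, 2nd or 3rd group within the triplet
        int step = position + 1;    // how far the group moves on each session

        var workshops = new List<int>();
        for (int session = 0; session < 5; session++)
        {
            workshops.Add(((triplet + session * step) % 5) + 1);
        }

        Console.WriteLine($"Group {groupLabels[group]}: workshops {string.Join(", ", workshops)}");
    }
}

Running it gives, for example, workshops 1, 3, 5, 2, 4 for group B and 1, 4, 2, 5, 3 for group C – the same pattern as in the diagram above.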

Now, at this point you have a problem: Your FRIENDS group have been split into two groups of 15. But ooh! Look! You did the same thing for A1 to O1 as you did for A2 to O2!

A1 and A2 are always in the same workshops at the same time. So you can move A2 into the same workshop as A1, and swap another group out into the spot left blank by A2.

I then renamed both A1 and A2 to “FRIENDS”, and that’s how I arrived at the spreadsheet pasted above.

POST SCRIPT (also see separate maths problem here):

Since I came up with this, I’ve realised I can improve it quite a lot so that people don’t have to stay with exactly the same 15 people:

The “FRIENDS” group mess it up a bit, but in most cases you’ll have parallel groups moving independently in duplicate workshops, eg when group C1 is doing workshop 3, group C2 will be doing the other duplicate of workshop 3.

Well… If each group of 15 was split into two sub groups of 7 and 8, then they could shuffle around and meet each other. Also this could be done dynamically.

So, for instance, you have C1a (7 people), C1b (8 people), C2a (7 people) and C2b (8 people).

In session 2, swap C1b and C2b so now only 7 (or 8) people stay together. Do the same with all groups (except those affected by the “FRIENDS” group).

In session 3, put C1a with C2a and C1b with C2b. This adds up to 14 and 16, so somebody will have to switch sub groups. In sessions 4 and 5 you could split them again, but this time randomly. As long as all the Cs are doing the same workshops, you can split them how you like. Ultimately they will all meet each other so they’ll still have repetition with 15 people overall, but it won’t be the same 15 people in every workshop. And they’ll still get 30 brand new people in every workshop.