There was a period of about an hour where we migrated our database to its own server in the same data centre.

This server has faster CPUs than our current one (3.5 GHz vs 3.0 GHz), and if all goes well we will move the extra CPUs we bought over to the DB server.

The outage was prolonged due to a PEBKAC issue: I mistyped an IP address in our new Postgres authentication config, so I kept getting a “permission error”.
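For anyone curious, the mistake was roughly along these lines, assuming the standard pg_hba.conf host-based auth that Postgres uses (the database name, user and addresses below are made up for illustration, not our real ones):

    # pg_hba.conf -- controls which hosts may connect to which databases
    # TYPE  DATABASE  USER   ADDRESS         METHOD
    # Typo'd address: the app server (10.0.0.12 in this sketch) never matches
    # this rule, so every connection is rejected before the password is checked.
    host    lemmy     lemmy  10.0.0.21/32    scram-sha-256
    # Corrected entry matching the app server's actual IP:
    host    lemmy     lemmy  10.0.0.12/32    scram-sha-256

One wrong digit and Postgres refuses the connection outright, which is exactly what dragged the outage out.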

Migrating the DB to its own server was always going to happen, but it was pushed forward by the recent donations we received from Guest on our Ko-Fi page! Thank you again!

If you see any issues relating to ungodly load times, please feel free to let us know!

Cheers,
Tiff

  • Tiff@reddthat.com (OP, Mod) · 3 months ago

    Alright, I’ve had a look, and we are now HA. We now have Lemmy frontend and backend instances across two different servers, with the third server being the new database server.
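    Roughly speaking, the traffic now flows like this. This is a simplified nginx-style sketch with placeholder hostnames and the default Lemmy ports, not our exact config:

        # Simplified load-balancer sketch -- hostnames are placeholders
        upstream lemmy_backend {
            server app1.internal:8536;   # lemmy backend on server 1
            server app2.internal:8536;   # lemmy backend on server 2
        }
        upstream lemmy_frontend {
            server app1.internal:1234;   # lemmy-ui on server 1
            server app2.internal:1234;   # lemmy-ui on server 2
        }
        server {
            listen 443 ssl;
            server_name reddthat.com;
            # API and federation traffic goes to the backend pool
            location /api/ { proxy_pass http://lemmy_backend; }
            # everything else is served by the UI pool
            location / { proxy_pass http://lemmy_frontend; }
        }
        # Both app servers point at the same (third) Postgres server.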

    This has reduced the number of connections through our original server to a more manageable number, which seems to have helped with the lag spikes! From the logs it looks like the lag spikes have disappeared entirely, but I’m not counting my chickens yet.

    With regard to regular requests: with the added server, on average we are looking at

    • comments: 1-3s
    • users: 2-5s

    which again is still not ideal, but it’s something to work towards.

    One of our servers is also slower at responding in general, so I think we’ll decommission it at the end of this month and use the money to buy 1-2 more frontend servers (that’s how big it is).