
I am a newbie when it comes to OpenStack with MAAS and Autopilot. I would like to build my own private cloud with Ubuntu 14.04 LTS and MAAS 1.9.



My goal is to have a decent setup I can use to deploy a fairly heavy Java Spring Tomcat application with MySQL, Solr, and RabbitMQ, plus MongoDB and/or CouchDB in the mix for a separate service I need to write. The application sifts through quite a bit of data and stores the analysis results for graphing (both real-time and offline).



This application (minus the CouchDB service) currently runs on a single Ubuntu machine (no cloud) with 32 GB of RAM, a 3rd-gen i7, a 500 GB SSD, and a 2 TB secondary HDD. That box is my QA / small-scale performance test environment only. I am building a home sandbox cloud to deploy this app.



I have 6 computers, each with the following spec:




  • 4-core Intel CPU with AMT technology
  • 8 GB RAM
  • 2× Gigabit NICs
  • 1× 240 GB SSD
  • 1× 1 TB HDD



I also have 2× D-Link 8-port EasySmart Gigabit Ethernet switches (DGS-1100-08). I was trying to follow Dimiter's blog, though his network architecture did not have the second HDD in mind.



Now my question is about the second disks. Would Ceph/Swift intelligently use the second disk for journalling, or for actual object storage? For my storage needs (less than 2 TB), would HDDs be a good idea, since I cannot afford to put 1 TB SSDs in these boxes? Given that the first disk in each box is a 240 GB SSD, would Ceph/Swift use the two disks appropriately?



Looking forward to your responses, as I don't want to go through the headache of deploying my app only to find out I need a different topology altogether.



Answers

Purely from a Ceph perspective, you would want to place the journal on the primary SSD, taking only a small share of it, and dedicate the 1 TB HDD to the OSD daemon.
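

To make that concrete, here is what the layout would look like with ceph-deploy, the manual provisioning tool of that era. The device names and partition numbers are assumptions for a typical install (OS plus journal on the SSD, data on the HDD); adjust to however MAAS actually partitions your nodes:

    # Assumed layout per node:
    #   /dev/sda -> 240 GB SSD (OS, plus a ~10 GB journal partition sda5)
    #   /dev/sdb -> 1 TB HDD (dedicated to OSD data)
    # Create one OSD per node, with its journal on the SSD partition:
    ceph-deploy osd create node1:/dev/sdb:/dev/sda5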



As Andreas' answer explains, Autopilot does not do this automagically yet, so you would need to build out the hyperconverged OpenStack and Ceph cluster manually.
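

If you do build it manually with Juju, the ceph charm of that era exposed osd-devices and osd-journal options, which map directly onto your two-disk layout. This is only a sketch; the option names and YAML below are from memory, so verify them against the charm's config.yaml before deploying:

    # ceph.yaml -- assumed charm options; check the cs:ceph charm's
    # documented config before relying on these names.
    ceph:
      osd-devices: /dev/sdb     # 1 TB HDD for OSD data
      osd-journal: /dev/sda5    # journal partition on the 240 GB SSD

    # Then deploy, e.g. three units to start:
    juju deploy -n 3 ceph --config ceph.yaml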



For performance, again from a Ceph point of view, at least 10 OSD nodes would be recommended. I would also suggest taking a look at Red Hat's reference architecture for Ceph and MySQL: it gives a rough idea of the performance you can achieve with MySQL on Ceph, particularly since it documents the hardware used to reach those numbers.
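

On the tuning side, the journal size and replication defaults live in ceph.conf; a minimal sketch for a small cluster like yours (the numbers are illustrative starting points, not figures from the Red Hat paper):

    [global]
    # Keep 3 replicas, but keep serving I/O when one node is down:
    osd pool default size = 3
    osd pool default min size = 2

    [osd]
    # Journal size in MB: a 10 GB journal per OSD fits easily on the
    # 240 GB SSD alongside the OS.
    osd journal size = 10240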

