MySQL for Pivotal Cloud Foundry®
This is documentation for the MySQL for Pivotal Cloud Foundry® (PCF) tile.
Current MySQL for PCF Details
- Version: 1.7.10
- Release Date: 2016-07-01
- Software component versions: MariaDB 10.0.21, Galera 25.3.9
- Compatible Ops Manager Version(s): 1.5.x, 1.6.x, 1.7.x
- Compatible Elastic Runtime Version(s): 1.5.x, 1.6.x, 1.7.x
- vSphere support? Yes
- AWS support? Yes
- OpenStack support? Yes
Consider the following compatibility information before upgrading MySQL for PCF.
For more information, refer to the full Product Version Matrix.
Upgrading From | Supported Upgrade To
---|---
1.5.0 | 1.6.1 - 1.6.13
1.6.1 - 1.6.13 | Next 1.6.X release - 1.7.10
1.6.1 - 1.6.13 | 1.8.0-edge.1 - 1.8.0-edge.7
1.7.0 - 1.7.9 | Next 1.7.X release - 1.7.10
1.7.0 - 1.7.9 | 1.8.0-edge.1 - 1.8.0-edge.7
1.7.10 | 1.8.0-edge.1 - 1.8.0-edge.7
The following table shows supported upgrades from an imported MySQL installation, by Ops Manager version.

Ops Manager Version | Upgrade From | Upgrade To
---|---|---
1.3.x | 1.2 | 1.3
1.4.x and 1.5.x | 1.3.2 | 1.4.0
1.4.x and 1.5.x | 1.4.0 | 1.5.0
1.4.x, 1.5.x, 1.6.x, and 1.7.x | 1.4.0 | 1.5.0
Consult the Release Notes for information about changes between versions of this product.
The MySQL for PCF product delivers a fully managed "Database as a Service" to Cloud Foundry users. When installed, the tile deploys and maintains a single-node or three-node cluster running a recent release of MariaDB, SQL Proxies for fast failover, and Service Brokers for Cloud Foundry integration. We work hard to ship the service configured with sane defaults, following the principle of least surprise for a general-use relational database service.
When installed, developers can attach a database to their applications with as few as two commands, cf create-service and cf bind-service (example commands are shown later in this topic). Connection credentials are automatically provided in the standard manner. Developers can select from a menu of service plan options, which are configured by the platform operator.
Two configurations are supported:
 | Single | Highly Available |
---|---|---|
**MySQL** | 1 node | 3-node cluster |
**SQL Proxy** | 1 node | 2 nodes |
**Service Broker** | 1 node | 2 nodes |
High Availability | - | Yes |
Multi-AZ Support | - | Yes |
Rolling Upgrades | - | Yes |
Automated Backups | Yes | Yes |
Customizable Plans | Yes | Yes |
Customizable VM Instances | Yes | Yes |
Plan Migrations | Yes | Yes |
Encrypted Communication | Yes ✝ | Yes ✝ |
Encrypted Data at-rest | - | - |
Long-lived Canaries | - | - |
(✝) Requires IPSEC BOSH plug-in
- Single and three-node clusters are the only supported topologies. Ops Manager allows the operator to set the number of instances to other values, but only one and three are advised. Please see the note in the Cluster Behavior document.
- Although two Proxy instances are deployed by default, there is no automation to direct clients from one to the other. See the note in the Proxy section, as well as the entry in Known Issues.
- Only the InnoDB storage engine is supported; it is the default storage engine for new tables. Attempted use of other storage engines (including MyISAM) may result in data loss.
- All databases are managed by shared, multi-tenant server processes. Although data is securely isolated between tenants using unique credentials, application performance may be impacted by noisy neighbors.
- Round-trip latency between database nodes must be less than five seconds; if the latency is higher than this, nodes will become partitioned. If more than half of cluster nodes are partitioned, the cluster will lose quorum and become unusable until manually bootstrapped.
- See also the list of Known Limitations in MariaDB cluster.
Consult the Known Issues topic for information about issues in current releases of MySQL for PCF.
- Download the product file from Pivotal Network.
- Upload the product file to your Ops Manager installation.
- Click Add next to the uploaded product description in the Available Products view to add this product to your staging area.
- Click the newly added tile to review configurable Settings.
- Click Apply Changes to deploy the service.
A single service plan enforces quotas of 100 megabytes of storage per database and 40 concurrent connections per user by default. Users of Operations Manager can configure these plan quotas. Changes to quotas will apply to all existing database instances as well as new instances. In calculating storage utilization, indexes are included along with raw tabular data.
The name of the plan is 100mb-dev by default and is automatically updated if the storage quota is modified. Thus, if the storage quota is changed to 1024 megabytes, the new default plan name will be 1024mb-dev.
Note: After changing a plan's definition, all instances of the plan must be updated. For each plan, either the operator or the user must run cf update-service SERVICE_INSTANCE -p NEW_PLAN_NAME on the command line.
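For example, if a quota change renamed the default plan to 1024mb-dev, a developer or operator would update an instance named mydb (the instance name here is illustrative) as follows:

$ cf update-service mydb -p 1024mb-dev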
Further Note: This feature does not work properly in MySQL for PCF versions 1.6.3 and earlier. See the entry in Known Issues for the recommended workaround.
Provisioning a service instance from this plan creates a MySQL database on a multi-tenant server, suitable for development workloads. Binding applications to the instance creates unique credentials for each application to access the database.
The service broker is deployed with a quota-enforcer process, which ensures that service instances do not exceed their allocated storage quota. When the quota is exceeded, the database users associated with the service instance will only be able to DELETE until the disk usage falls under the quota.
The Quota Enforcer Frequency property controls how often the quota enforcer polls for users that have exceeded their quota.
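The following is a hypothetical mysql session against an over-quota instance, sketching the behavior described above; the table name and values are illustrative, not part of the product:

mysql> INSERT INTO app_data (id) VALUES (42);
-- denied while the instance exceeds its storage quota
mysql> DELETE FROM app_data WHERE archived = 1;
-- permitted, so disk usage can be brought back under the quota
mysql> INSERT INTO app_data (id) VALUES (42);
-- succeeds again once the quota enforcer sees usage below the quota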
When enabled, the database server logs additional events surrounding errors to help in identifying and correcting problems. Note: these logs are error output; if you have configured syslog, this option may transmit unencrypted application data to your syslog server.
The proxy tier is responsible for routing connections from applications to healthy MariaDB cluster nodes, even in the event of node failure.
Applications are provided with a hostname or IP address to reach a database managed by the service. For more information, see Application Binding. By default, the MySQL service will provide bound applications with the IP of the first instance in the proxy tier. Even if additional proxy instances are deployed, client connections will not be routed through them. This means the first proxy instance is a single point of failure.
In order to eliminate the first proxy instance as a single point of failure, operators must configure a load balancer to route client connections to all proxy IPs, and configure the MySQL service to give bound applications a hostname or IP address that resolves to the load balancer.
In older versions of the product, applications were given the IP of the single MySQL server in bind credentials. When upgrading to v1.5.0, existing applications will continue to function, but to take advantage of high availability features they must be rebound to receive either the IP of the first proxy instance or the IP/hostname of a load balancer.
To configure a load balancer with the IPs of the proxy tier before v1.5.0 is deployed, and to prevent applications from obtaining the IP of the first proxy instance, the product enables an operator to configure the IPs that are assigned to proxy instances. The following instructions apply to the Proxy settings page for the MySQL product in Operations Manager.
- In the Proxy IPs field, enter a list of IP addresses that should be assigned to the proxy instances. These IP addresses must be in the CIDR range configured in the Director tile and not currently allocated to another VM. Look at the Status pages of other tiles to see which IP addresses are in use.
- In the Binding Credentials Hostname field, enter the hostname or IP address that should be given to bound applications for connecting to databases managed by the service. This hostname or IP address should resolve to your load balancer and be considered long-lived. When this field is modified, applications must be rebound to receive updated credentials.
Configure your load balancer to route connections for a hostname or IP to the proxy IPs. As proxy instances are not synchronized, we recommend configuring your load balancer to send all traffic to one proxy instance at a time until it fails, then fail over to another proxy instance. For details, see Known Issues.
Important: To configure your load balancer with a healthcheck or monitor, use TCP against port 1936. Unauthenticated healthchecks against port 3306 will cause the service to become unavailable, and will require manual intervention to fix.
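As a concrete illustration, here is a minimal load balancer sketch, assuming HAProxy and placeholder proxy IPs (192.0.2.10 and 192.0.2.11); it routes all traffic to the first proxy instance and fails over to the second, using a TCP healthcheck on port 1936 as described in the note above:

# Hypothetical HAProxy configuration sketch; IPs are placeholders.
listen p-mysql-proxy
  bind *:3306
  mode tcp
  # healthcheck uses TCP against port 1936, never an unauthenticated check on 3306
  server proxy-0 192.0.2.10:3306 check port 1936
  server proxy-1 192.0.2.11:3306 check port 1936 backup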
If v1.5.0 is initially deployed without a load balancer and without proxy IPs configured, a load balancer can be setup later to remove the proxy as a single point of failure. However, there are several implications to consider:
- Applications will have to be rebound to receive the hostname or IP that resolves to the load balancer. To rebind: unbind your application from the service instance, bind it again, then restage your application, as sketched in the example after this list. For more information, see Managing Service Instances with the CLI. To avoid unnecessary rebinding, we recommend configuring a load balancer before deploying v1.5.0.
- Instead of configuring the proxy IPs in Operations Manager, use the IPs that were dynamically assigned by looking at the Status page. Configuration of proxy IPs after the product is deployed with dynamically assigned IPs is not well supported; see Known Issues.
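A minimal rebinding sequence, using the illustrative application and instance names myapp and mydb:

$ cf unbind-service myapp mydb
$ cf bind-service myapp mydb
$ cf restage myapp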
Two lifecycle errands are run by default: the broker registrar and the smoke test. The broker registrar errand registers the broker with the Cloud Controller and makes the service plan public. The smoke test errand runs basic tests to validate that service instances can be created and deleted, and that applications pushed to Elastic Runtime can be bound and write to MySQL service instances. Both errands can be turned on or off on the Lifecycle Errands page under the Settings tab.
An operator can control how many database instances can be provisioned (instance capacity) by adjusting the amount of persistent disk allocated to the MySQL server nodes. The broker will provision a requested database if there is sufficient unreserved persistent disk. This can be managed using the Persistent Disk field for the MySQL Server job in the Resource Config setting page in Operations Manager. Not all persistent disk will be available for instance capacity; about 2-3 GB is reserved for service operation. Adding nodes to the cluster increases durability, not capacity. Multiple backend clusters, to increase capacity or for isolation, are not yet supported.
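As a rough, hypothetical sizing example: a MySQL server node with a 100 GB persistent disk, less roughly 3 GB reserved for service operation, leaves about 97 GB for databases, which supports on the order of 970 instances of the default 100 MB plan.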
In determining how much persistent disk to make available for databases, operators should also consider that MariaDB servers require sufficient CPU, RAM, and IOPS to promptly respond to client requests for all databases.
As part of installation, the product is automatically registered with Pivotal Cloud Foundry® Elastic Runtime (see Lifecycle Errands). On successful installation, the MySQL service is available to application developers in the Services Marketplace, via the web-based Developer Console or cf marketplace. Developers can then provision instances of the service and bind them to their applications:
$ cf create-service p-mysql 100mb-dev mydb
$ cf bind-service myapp mydb
$ cf restart myapp
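To confirm the credentials an application received, developers can print its environment; the binding appears in the VCAP_SERVICES environment variable:

$ cf env myapp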
For more information about the use of services, see the Services Overview.
To help application developers get started with MySQL for PCF, we have provided an example application, which can be downloaded here. Instructions can be found in the included README.
Cloud Foundry users can access a dashboard for each MySQL service instance via SSO from Apps Manager. The dashboard displays the current storage utilization of the database and the plan quota for storage. On the Space page in Apps Manager, users with the SpaceDeveloper role will find a Manage link next to the instance. Clicking this link logs users into the service dashboard via SSO.
The service provides a dashboard where administrators can observe health and metrics for each instance in the proxy tier. Metrics include the number of client connections routed to each backend database cluster node.
The dashboard for each proxy instance can be found at http://proxy-<job index>.p-mysql.<system-domain>. Job index starts at 0, so if you have two proxy instances deployed and your system domain is example.com, the dashboards would be accessible at http://proxy-0.p-mysql.example.com and http://proxy-1.p-mysql.example.com.
Basic auth credentials are required to access the dashboard. These can be found in the Credentials tab of the MySQL product in Operations Manager.
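To quickly verify access from a terminal, an authenticated request can be made with curl; the credentials and domain below are placeholders:

$ curl -u USERNAME:PASSWORD http://proxy-0.p-mysql.example.com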
For more information about Switchboard, read the proxy documentation.
- Notes on cluster configuration
- Backing Up MySQL for PCF
Note: For information about backing up your PCF installation, refer to Backing Up and Restoring Pivotal Cloud Foundry®.
- Determining MySQL cluster state
- More on Cluster Scaling, Node Failure, and Quorum
- Bootstrapping an ailing MySQL cluster
- Scaling down a MySQL cluster