Deploy Sharded Cluster with Keyfile Access Control
Overview
This example involves creating a new sharded cluster that consists of a mongos, the config server replica set, and two shard replica sets, all with keyfile access control.

Servers:
Config Servers:
Server1 - 27019
Server2 - 27019
Server3 - 27019

Shard 1:
Server4 - 27019
Server5 - 27019
Server6 - 27020

Shard 2:
Server7 - 27018
Server8 - 27018
Server9 - 27018

Mongos:
Server10 - 27017
Create the Keyfile
# openssl rand -base64 756 > /data/sand/keyfile
# chmod 400 /data/sand/keyfile

[tommy@Server1 ~]$ openssl rand -base64 756 > /data/sand/keyfile
[tommy@Server1 ~]$ chmod 400 /data/sand/keyfile
[tommy@Server1 ~]$ cat /data/sand/keyfile
icUb7bOy67wUIwoYdJCvOqK+Dbzyc61NEGMMm6Dcus/UrcIttN+zdKPzc1ZW9jjE
lz3sCXEP6RykLUrfELABTNzP/SrRd24pdlxi+YG3ueOT5MQj8Xu9CBu9WWBC3tWW
XQGTKWH6FFjk6QHj8sqsTSYpz2FXyQ3M+fIDQW08yw5BBnRKH20vmOHnPs7APZGY
5fa7RieAik1eBSOTjvAScA3Z2kpttsxH/xtJ7zLOljts/SbW1VFfajLmoBpUTHbX
siuVqFZyka1THLVIP5Dp0FWB28oRsV6geJ8pfn0TfjS6AmqFdF/daa4WEu4js8qJ
Yah8+2giwxbXpJ+KQgWC2lBrfrTq5eUrgP4C13w1o63ZcRZiOMIERKPAb41LtiKF
KST272HoSEMvwMgp2xyFMS2b0HUQ7xV/cCBp7Xyv2E+0UojpURd0nLN2pw1lk+NW
gDzYQ21mopz2PxR3/k+zA5WSUOQakhMFsHIptVIY5DrYWrD1QZsZssV8YH7UkZF2
LYW9t3FNFkKDDz4RKJpKZwoNomR8FgI2KDA0Ej6RXwMPhSNqAkObCaWp+FNnRYvQ
6GwVPs5ugSrVF06vdzcAUTQW4zWzpi0CWiU2T30ugDoQ5CuHyKDcRz7mMMEYLBpJ
ApXZMSiJrI3E4IvNxhyMQyo+7tukVl+lxDIIFq28ZATAnmcy9X/0GNqxJcL5Wcc4
wYZVZPGqdjVC6oY1dwOxjENo702e5BCX2O0F8PUNx3kLLpYnR50nuyoAJ7HI8r85
LNtxmlwR5da5iX56JIYZYSdei6I3QVVWUvhXz3VwahmFfNRVF8Vu2WYXz60/iMvu
u3Ice35PhpGKitqBRuVeIL+gyA0Acsx6HYc0+uf2AXV4Hjidvervnm67s3nofKmx
Eb0xRanRXRS/QmUeOdIwYkZqqDzQpc2a31mJBkfJA74behMoHv+MM9ZBy8dqTceT
/RplhMYY8bpjaIMrTwWLdagKpWZOLCUkXiGrrvEs6ibZ4qL7
[tommy@Server1 ~]$
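mongod will refuse to start if the keyfile is group- or world-readable, so it is worth checking the file the same way mongod will. A minimal sketch, generating into a scratch directory for illustration (the servers in this example keep the file at /data/sand/keyfile):

```shell
# Generate a keyfile and verify it the way mongod checks it at startup.
set -e
dir=$(mktemp -d)                     # use /data/sand on the real servers
openssl rand -base64 756 > "$dir/keyfile"
chmod 400 "$dir/keyfile"

# mongod rejects keyfiles that are group- or world-readable; confirm 400.
perms=$(stat -c '%a' "$dir/keyfile")
echo "permissions: $perms"

# 756 random bytes encode to 1008 base64 characters, inside mongod's
# 6-to-1024-character keyfile limit.
chars=$(tr -d '\n' < "$dir/keyfile" | wc -c | tr -d ' ')
echo "characters: $chars"
```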
Distribute the Keyfile
Copy the keyfile to all the servers at the same location, /data/sand/keyfile (for example, open the file with vi on each server, paste the contents, and save):

# vi /data/sand/keyfile
# :wq!
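Alternatively, an scp loop can push the file to all ten servers in one go (hostnames and the tommy account are taken from the transcripts in this example). The sketch below only prints the commands it would run; set DRY_RUN=0 to execute them:

```shell
# Distribute the keyfile to every server at the same path.
# DRY_RUN=1 only prints each command; set DRY_RUN=0 to execute.
DRY_RUN=1
KEYFILE=/data/sand/keyfile

for i in $(seq 1 10); do
  host="Server$i"
  cmd="scp -p $KEYFILE tommy@$host:$KEYFILE"   # -p preserves the 400 mode
  if [ "$DRY_RUN" = 1 ]; then
    echo "$cmd"
  else
    $cmd
  fi
done
```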
Config Files:
Config Servers:
net:
  port: 27019
processManagement:
  fork: true
sharding:
  clusterRole: configsvr
replication:
  replSetName: csrs
security:
  keyFile: /data/sand/keyfile
storage:
  dbPath: /data/sand/data
systemLog:
  destination: file
  logAppend: true
  logRotate: rename
  path: /data/sand/logs/mongod.log
Shard Servers:
net:
  port: 27019
processManagement:
  fork: true
sharding:
  clusterRole: shardsvr
replication:
  replSetName: rs_0
security:
  keyFile: /data/sand/keyfile
storage:
  dbPath: /data/sand/data
systemLog:
  destination: file
  logAppend: true
  logRotate: rename
  path: /data/sand/logs/mongod.log

(Shown for Shard 1 members on port 27019; Server6 uses port 27020, and Shard 2 members use replSetName rs_1 on port 27018.)
Mongos:
net:
  port: 27017
processManagement:
  fork: true
sharding:
  configDB: csrs/Server1:27019,Server2:27019,Server3:27019
security:
  keyFile: /data/sand/keyfile
systemLog:
  destination: file
  logAppend: true
  logRotate: rename
  path: /home/tommy/logs/mongos.log
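Since the mongod config files above share most of their settings, a small generator script can keep them consistent across servers. A sketch, following the file names and paths of this example (output goes to a temporary directory here instead of /data/sand/conf; the mongos file differs enough, having no storage section, that it is left out):

```shell
# Generate the mongod config files shown above from one template, so the
# per-role differences (port, clusterRole, replSetName) live in one place.
outdir=$(mktemp -d)   # use /data/sand/conf on the real servers

write_conf() {  # $1=filename  $2=port  $3=role-specific YAML block
  {
    printf 'net:\n  port: %s\nprocessManagement:\n  fork: true\n' "$2"
    printf '%s\n' "$3"
    cat <<'EOF'
security:
  keyFile: /data/sand/keyfile
storage:
  dbPath: /data/sand/data
systemLog:
  destination: file
  logAppend: true
  logRotate: rename
  path: /data/sand/logs/mongod.log
EOF
  } > "$outdir/$1"
}

write_conf mongod_config.conf 27019 'sharding:
  clusterRole: configsvr
replication:
  replSetName: csrs'

write_conf mongod.conf 27019 'sharding:
  clusterRole: shardsvr
replication:
  replSetName: rs_0'

grep clusterRole "$outdir"/*.conf
```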
To start the servers, run the following commands:

Shard Servers:
/data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod.conf

Config Servers:
/data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod_config.conf

Mongos Server:
/home/tommy/mongodb_3.4.10/bin/mongos -f /home/tommy/conf/mongos.conf
Deploy Sharded Cluster with Security

1. Create the Config Server Replica Set

Start each member of the config server replica set.
Configuration Options
sharding:
  clusterRole: configsvr
replication:
  replSetName: csrs
security:
  keyFile: /data/sand/keyfile
Config Servers:
Server1 - 27019
Server2 - 27019
Server3 - 27019

Start all the servers using the following command.
Server1

/data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod_config.conf

[tommy@Server1 ~]$ /data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod_config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 56110
child process started successfully, parent exiting
[tommy@Server1 ~]$

Server2

/data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod_config.conf

[tommy@Server2 ~]$ /data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod_config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 63252
child process started successfully, parent exiting
[tommy@Server2 ~]$

Server3

/data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod_config.conf

[tommy@Server3 ~]$ /data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod_config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 58635
child process started successfully, parent exiting
[tommy@Server3 ~]$
Connect to one of the config servers.

Server1

# mongo --port 27019

[tommy@Server1 ~]$ /data/sand/mongodb_3.4.10/bin/mongo --port 27019
MongoDB shell version v3.4.10
connecting to: mongodb://127.0.0.1:27019/
MongoDB server version: 3.4.10
>
Initiate and add members to the config server replica set.

# rs.initiate(
  {
    _id: "csrs",
    configsvr: true,
    members: [
      { _id : 0, host : "Server1:27019" },
      { _id : 1, host : "Server2:27019" },
      { _id : 2, host : "Server3:27019" }
    ]
  }
)
> rs.initiate(
... {
...   _id: "csrs",
...   configsvr: true,
...   members: [
...     { _id : 0, host : "Server1:27019" },
...     { _id : 1, host : "Server2:27019" },
...     { _id : 2, host : "Server3:27019" }
...   ]
... }
... )
{ "ok" : 1 }
csrs:SECONDARY>
csrs:SECONDARY>
csrs:PRIMARY>
Once the config server replica set (CSRS) is initiated and up, proceed to creating the shard replica sets.
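Before moving on, it can be worth confirming that the set has elected a primary (the prompt changing to PRIMARY above shows this interactively). A non-interactive sketch using mongo --eval, with the binary path and port from this example; the command is printed here rather than executed, since it needs the live replica set:

```shell
# Build a check that counts PRIMARY members via rs.status().
# Remove the echo to actually run it against the config server.
MONGO=/data/sand/mongodb_3.4.10/bin/mongo
CHECK='rs.status().members.filter(function(m){ return m.stateStr === "PRIMARY"; }).length'
cmd="$MONGO --port 27019 --quiet --eval '$CHECK'"
echo "$cmd"    # the real command prints 1 once a primary has been elected
```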
2. Create the Shard Replica Sets

Start each member of the shard replica set.
Configuration Options
sharding:
  clusterRole: shardsvr
replication:
  replSetName: rs_0
security:
  keyFile: /data/sand/keyfile
Shard 1:
Server4 - 27019
Server5 - 27019
Server6 - 27020

Start all the servers in Shard 1 using the following command.
Server4

/data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod.conf

[tommy@Server4 ~]$ /data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 12833
child process started successfully, parent exiting
[tommy@Server4 ~]$

Server5

/data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod.conf

[tommy@Server5 ~]$ /data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 24227
child process started successfully, parent exiting
[tommy@Server5 ~]$

Server6

/data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod.conf

[tommy@Server6 ~]$ /data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 39854
child process started successfully, parent exiting
[tommy@Server6 ~]$
Connect to a member of the shard replica set.

Server4

# mongo --port 27019

[tommy@Server4 ~]$ /data/sand/mongodb_3.4.10/bin/mongo --port 27019
MongoDB shell version v3.4.10
connecting to: mongodb://127.0.0.1:27019/
MongoDB server version: 3.4.10
>
Initiate and add members to the replica set.

# rs.initiate(
  {
    _id: "rs_0",
    members: [
      { _id : 0, host : "Server4:27019" },
      { _id : 1, host : "Server5:27019" },
      { _id : 2, host : "Server6:27020" }
    ]
  }
)
> rs.initiate(
... {
...   _id: "rs_0",
...   members: [
...     { _id : 0, host : "Server4:27019" },
...     { _id : 1, host : "Server5:27019" },
...     { _id : 2, host : "Server6:27020" }
...   ]
... }
... )
{ "ok" : 1 }
rs_0:SECONDARY>
rs_0:PRIMARY>
rs_0:PRIMARY>
Shard 2:
Server7 - 27018
Server8 - 27018
Server9 - 27018

Configuration Options

sharding:
  clusterRole: shardsvr
replication:
  replSetName: rs_1
security:
  keyFile: /data/sand/keyfile

Start all the servers in Shard 2 using the following command.
Server7

/data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod.conf

[tommy@Server7 ~]$ /data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 13304
child process started successfully, parent exiting
[tommy@Server7 ~]$

Server8

/data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod.conf

[tommy@Server8 ~]$ /data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 54249
child process started successfully, parent exiting
[tommy@Server8 ~]$

Server9

/data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod.conf

[tommy@Server9 ~]$ /data/sand/mongodb_3.4.10/bin/mongod -f /data/sand/conf/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 47531
child process started successfully, parent exiting
[tommy@Server9 ~]$
Connect to a member of the shard replica set.

Server7

# /data/sand/mongodb_3.4.10/bin/mongo --port 27018

[tommy@Server7 ~]$ /data/sand/mongodb_3.4.10/bin/mongo --port 27018
MongoDB shell version v3.4.10
connecting to: mongodb://127.0.0.1:27018/
MongoDB server version: 3.4.10
>
Initiate and add members to the replica set.

# rs.initiate(
  {
    _id: "rs_1",
    members: [
      { _id : 0, host : "Server7:27018" },
      { _id : 1, host : "Server8:27018" },
      { _id : 2, host : "Server9:27018" }
    ]
  }
)
> rs.initiate(
... {
...   _id: "rs_1",
...   members: [
...     { _id : 0, host : "Server7:27018" },
...     { _id : 1, host : "Server8:27018" },
...     { _id : 2, host : "Server9:27018" }
...   ]
... }
... )
{ "ok" : 1 }
rs_1:SECONDARY>
rs_1:PRIMARY>
rs_1:PRIMARY>
3. Create the shard-local user administrator (optional)
Connect to each shard's primary and create an admin user,

# admin = db.getSiblingDB("admin")
# admin.createUser(
  {
    user: "fred",
    pwd: "changeme1",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)
rs_1:PRIMARY> admin = db.getSiblingDB("admin")
admin
rs_1:PRIMARY> admin.createUser(
... {
...   user: "fred",
...   pwd: "changeme1",
...   roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
... }
... )
Successfully added user: {
  "user" : "fred",
  "roles" : [
    {
      "role" : "userAdminAnyDatabase",
      "db" : "admin"
    }
  ]
}
rs_1:PRIMARY>
4. Authenticate as the shard-local user administrator (optional).
Authenticate to the admin database.

# db.getSiblingDB("admin").auth("fred", "changeme1")
# /data/sand/mongodb_3.4.10/bin/mongo --port 27018 -u "fred" -p "changeme1" --authenticationDatabase "admin"

rs_1:PRIMARY> db.getSiblingDB("admin").auth("fred", "changeme1")
1
rs_1:PRIMARY> ^C
bye
[tommy@Server7 ~]$ /data/sand/mongodb_3.4.10/bin/mongo --port 27018 -u "fred" -p "changeme1" --authenticationDatabase "admin"
MongoDB shell version v3.4.10
connecting to: mongodb://127.0.0.1:27018/
MongoDB server version: 3.4.10
rs_1:PRIMARY>
5. Create the shard-local cluster administrator (optional).
Create a cluster admin user if required,

# db.getSiblingDB("admin").createUser(
  {
    "user" : "ravi",
    "pwd" : "changeme2",
    roles: [ { "role" : "clusterAdmin", "db" : "admin" } ]
  }
)
rs_1:PRIMARY> db.getSiblingDB("admin").createUser(
... {
...   "user" : "ravi",
...   "pwd" : "changeme2",
...   roles: [ { "role" : "clusterAdmin", "db" : "admin" } ]
... }
... )
Successfully added user: {
  "user" : "ravi",
  "roles" : [
    {
      "role" : "clusterAdmin",
      "db" : "admin"
    }
  ]
}
rs_1:PRIMARY>
6. Connect a mongos to the Sharded Cluster

Connect a mongos to the cluster.
Configuration Options
sharding:
  configDB: csrs/Server1:27019,Server2:27019,Server3:27019
security:
  keyFile: /data/sand/keyfile
Mongos:
Server10 - 27017

Start the mongos server using the command,
Server10

# /home/tommy/mongodb_3.4.10/bin/mongos -f /home/tommy/conf/mongos.conf

[tommy@Server10 ~]$ /home/tommy/mongodb_3.4.10/bin/mongos -f /home/tommy/conf/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 47709
child process started successfully, parent exiting
[tommy@Server10 ~]$
Connect to the mongos.

# /home/tommy/mongodb_3.4.10/bin/mongo

[tommy@Server10 ~]$ /home/tommy/mongodb_3.4.10/bin/mongo
MongoDB shell version v3.4.10
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.10
mongos>
Create the user administrator.

Create a cluster-level user admin,

# admin = db.getSiblingDB("admin")
# admin.createUser(
  {
    user: "fred",
    pwd: "changeme1",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)
mongos> admin = db.getSiblingDB("admin")
admin
mongos> admin.createUser(
... {
...   user: "fred",
...   pwd: "changeme1",
...   roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
... }
... )
Successfully added user: {
  "user" : "fred",
  "roles" : [
    {
      "role" : "userAdminAnyDatabase",
      "db" : "admin"
    }
  ]
}
mongos>
Create Administrative User for Cluster Management

To perform replication and sharding operations, create a cluster admin user if required,

# db.getSiblingDB("admin").createUser(
  {
    "user" : "ravi",
    "pwd" : "changeme2",
    roles: [ { "role" : "clusterAdmin", "db" : "admin" } ]
  }
)
[tommy@Server10 ~]$ /home/tommy/mongodb_3.4.10/bin/mongo -u "fred" -p "changeme1" --authenticationDatabase "admin"
MongoDB shell version v3.4.10
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.10
mongos> db.getSiblingDB("admin").createUser(
... {
...   "user" : "ravi",
...   "pwd" : "changeme2",
...   roles: [ { "role" : "clusterAdmin", "db" : "admin" } ]
... }
... )
Successfully added user: {
  "user" : "ravi",
  "roles" : [
    {
      "role" : "clusterAdmin",
      "db" : "admin"
    }
  ]
}
mongos>
mongos>
Create additional users (Optional).

Create users to allow clients to connect to and access the sharded cluster, if required.
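For example, a read/write application user might look like the following sketch. The database appdb, the user appuser, and its password are hypothetical; the command runs through mongos as the user administrator created above, and is echoed here rather than executed since it needs the live cluster:

```shell
# Create a hypothetical read/write user for an application database "appdb".
JS='db.getSiblingDB("admin").createUser({
  user: "appuser",
  pwd: "changeme3",
  roles: [ { role: "readWrite", db: "appdb" } ]
})'
cmd="/home/tommy/mongodb_3.4.10/bin/mongo -u fred -p changeme1 --authenticationDatabase admin --eval"
echo "$cmd \"\$JS\""   # on Server10, drop the echo and pass "$JS" to run it
```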
7. Add Shards to the Cluster

Log in as the cluster admin user to add shards.

Use the sh.addShard() method to add each shard to the cluster. If the shard is a replica set, specify the name of the replica set and specify a member of the set.
# sh.addShard("rs_0/Server4:27019")

[tommy@Server10 ~]$ /home/tommy/mongodb_3.4.10/bin/mongo -u "ravi" -p "changeme2" --authenticationDatabase "admin"
MongoDB shell version v3.4.10
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.10
mongos> sh.addShard("rs_0/Server4:27019")
{ "shardAdded" : "rs_0", "ok" : 1 }
mongos>

# sh.addShard("rs_1/Server7:27018")

mongos> sh.addShard("rs_1/Server7:27018")
{ "shardAdded" : "rs_1", "ok" : 1 }
mongos>
To check the status of the sharded cluster,

# sh.status()

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5b0c78a7f436df87182714f3")
  }
  shards:
    { "_id" : "rs_0", "host" : "rs_0/Server4:27019,Server5:27019,Server6:27020", "state" : 1 }
    { "_id" : "rs_1", "host" : "rs_1/Server9:27018,Server7:27018,Server8:27018", "state" : 1 }
  active mongoses:
    "3.4.10" : 1
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled: yes
    Currently running: no
    NaN
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      No recent migrations
  databases:

mongos>
Done…