M103 - Basic Cluster Administration - Mongo University - Solutions
==================================================
C:\Users\prins\Documents\m103\m103-vagrant-env
============================================================================================================
Chapter 0: Introduction & Setup
================================
vagrant ssh
validate_box
6445a3f8b6f1cc5873cf1ac94194903444602708d4eb189d42b6e65ca594d80d
============================================================================================================
Chapter 1: The Mongod
======================
mongod --dbpath /data/db/ --port 27000 --bind_ip "127.0.0.1,192.168.103.100" --auth
mongo admin --host localhost:27000 --eval '
  db.createUser({
    user: "m103-admin",
    pwd: "m103-pass",
    roles: [
      {role: "root", db: "admin"}
    ]
  })
'
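A quick way to confirm the new user authenticates (an optional check, not part of the lab):
mongo admin --host localhost:27000 -u m103-admin -p m103-pass \
  --eval 'db.runCommand({connectionStatus: 1})'
# connectionStatus reports the authenticated user and its roles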
Chapter 1: The Mongod
Lab - Launching Mongod:
-------------------------
validate_lab_launch_mongod
5a21c6dd403b6546001e79c0
-----------------------------------------------------------------------------------------------------------
mkdir /data/logs
vi /data/mongod.conf
storage:
  dbPath: "/data/db"
systemLog:
  path: "/data/logs/mongod.log"
  destination: "file"
net:
  bindIp: "127.0.0.1,192.168.103.100"
  port: 27000
security:
  authorization: enabled
processManagement:
  fork: true
mongod -f /data/mongod.conf
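To verify the forked mongod came up with this configuration (optional sanity checks):
ps -ef | grep mongod            # the process should be running with -f /data/mongod.conf
tail /data/logs/mongod.log      # look for "waiting for connections on port 27000"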
Chapter 1: The Mongod
Lab - Configuration File
--------------------------
validate_lab_configuration_file
5a2f0e41ae3c4e2f7427ee8f
---------------------------------------------------------------------------------------------------------
sudo mkdir -p /var/mongodb/db/                   # create the new dbPath
sudo chown -R vagrant:vagrant /var/mongodb/db/   # give the vagrant user ownership of it
sudo kill -9 2400                                # stop the old mongod (PID found via ps)
rm -rf mongodb-27000.sock                        # clear the stale socket file left in /tmp, if any
mongod -f /data/mongod.conf                      # restart with the updated config
storage:
  dbPath: "/var/mongodb/db/"
systemLog:
  path: "/data/logs/mongod.log"
  destination: "file"
net:
  bindIp: "127.0.0.1,192.168.103.100"
  port: 27000
security:
  authorization: enabled
processManagement:
  fork: true
mongo admin --host localhost:27000 --eval '
  db.createUser({
    user: "m103-admin",
    pwd: "m103-pass",
    roles: [
      {role: "root", db: "admin"}
    ]
  })
'
Chapter 1: The Mongod
Lab - Change the Default DB Path
---------------------------------
validate_lab_change_dbpath
5a2f973bcb6b357b57e6bf43
--------------------------------------------------------------
mongo admin --port 27000 -u m103-admin -p m103-pass --eval 'db.shutdownServer()'
storage:
  dbPath: "/var/mongodb/db/"
systemLog:
  path: "/var/mongodb/db/mongod.log"
  destination: "file"
  logAppend: true
net:
  bindIp: "127.0.0.1,192.168.103.100"
  port: 27000
security:
  authorization: enabled
processManagement:
  fork: true
operationProfiling:
  slowOpThresholdMs: 50
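With this configuration loaded, the profiler threshold can be confirmed at runtime; a small sketch:
mongo admin --port 27000 -u m103-admin -p m103-pass --eval 'db.getProfilingStatus()'
# expect "slowms" : 50; operations slower than that are written to
# /var/mongodb/db/mongod.log even at profiling level 0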
Chapter 1: The Mongod
Lab - Logging to a Different Facility
---------------------------------------
validate_lab_different_logpath
5a32e5835d7a25685155aa61
--------------------------------------------------------------
mongo admin --host localhost:27000 -u m103-admin -p m103-pass --eval '
  db.createUser({
    user: "m103-application-user",
    pwd: "m103-application-pass",
    roles: [
      {role: "readWrite", db: "applicationData"}
    ]
  })
'
Chapter 1: The Mongod
Lab - Creating First Application User
--------------------------------------
validate_lab_first_application_user
5a32fdd630bff1f2fcb87acf
-------------------------------------------------------------
mongoimport --port 27000 -u m103-application-user -p m103-application-pass --authenticationDatabase admin -d applicationData -c products /dataset/products.json
vagrant@m103:/tmp$ mongoimport --port 27000 -u m103-application-user -p m103-application-pass --authenticationDatabase admin -d applicationData -c products /dataset/products.json
2019-01-20T14:59:08.225+0000 connected to: localhost:27000
2019-01-20T14:59:11.212+0000 [###.....................] applicationData.products 14.6MB/87.9MB (16.6%)
2019-01-20T14:59:14.212+0000 [#######.................] applicationData.products 29.0MB/87.9MB (32.9%)
2019-01-20T14:59:17.209+0000 [###########.............] applicationData.products 43.3MB/87.9MB (49.2%)
2019-01-20T14:59:20.209+0000 [###############.........] applicationData.products 57.3MB/87.9MB (65.2%)
2019-01-20T14:59:23.209+0000 [###################.....] applicationData.products 71.7MB/87.9MB (81.5%)
2019-01-20T14:59:26.209+0000 [#######################.] applicationData.products 86.5MB/87.9MB (98.4%)
2019-01-20T14:59:26.470+0000 [########################] applicationData.products 87.9MB/87.9MB (100.0%)
2019-01-20T14:59:26.472+0000 imported 516784 documents
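An optional check that the import landed, using the application user created above:
mongo applicationData --port 27000 -u m103-application-user -p m103-application-pass \
  --authenticationDatabase admin --eval 'db.products.count()'
# should print 516784, matching the mongoimport summary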
Chapter 1: The Mongod
Lab - Importing a Dataset
--------------------------
validate_lab_import_dataset
5a383323ba6dbcf3cbcaec97
============================================================================================================================================
Chapter 2: Replication
=======================
mongo admin --port 27000 -u m103-admin -p m103-pass --eval 'db.shutdownServer()'
mongod-repl-1.conf
vi mongod-repl-1.conf
storage:
  dbPath: /var/mongodb/db/1
net:
  bindIp: 192.168.103.100,localhost
  port: 27001
security:
  authorization: enabled
  keyFile: /var/mongodb/pki/m103-keyfile
systemLog:
  destination: file
  path: /var/mongodb/db/mongod1.log
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-repl
cp mongod-repl-1.conf mongod-repl-2.conf
cp mongod-repl-1.conf mongod-repl-3.conf
vi mongod-repl-2.conf
vi mongod-repl-3.conf
mkdir /var/mongodb/db/{1,2,3}
sudo mkdir -p /var/mongodb/pki
sudo chown vagrant:vagrant -R /var/mongodb
openssl rand -base64 741 > /var/mongodb/pki/m103-keyfile
chmod 600 /var/mongodb/pki/m103-keyfile
mongod -f mongod-repl-1.conf
mongod -f mongod-repl-2.conf
mongod -f mongod-repl-3.conf
mongo --port 27001
rs.initiate()
use admin
db.createUser({
  user: "m103-admin",
  pwd: "m103-pass",
  roles: [
    {role: "root", db: "admin"}
  ]
})
exit
mongo --host "m103-repl/192.168.103.100:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
rs.status()
rs.add("m103:27002")
rs.add("m103:27003")
Chapter 2: Replication
Lab - Initiate a Replica Set Locally
-------------------------------------
validate_lab_initialize_local_replica_set
5a4d32f979235b109001c7bc
----------------------------------------------------------------------------------------------------------------
mongo --host "m103-repl/192.168.103.100:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
rs.stepDown()
rs.status()
rs.remove("192.168.103.100:27001")
rs.status()
rs.add("m103:27001")
rs.status()
Chapter 2: Replication
Lab - Remove and Re-Add a Node
--------------------------------
validate_lab_remove_readd_node
5a4fff19c0324e9feb9f60b9
-----------------------------------------------------------------------------------------------------------------
mongod -f mongod-repl-1.conf
mongod -f mongod-repl-2.conf
mongod -f mongod-repl-3.conf
mongo --host "m103-repl/m103:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" (or)
mongo --port 27001 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
rs.status()
mongo admin --port 27003 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval 'db.shutdownServer()'
use testDatabase
db.new_data.insert({"m103": "very fun"}, { writeConcern: { w: 3, wtimeout: 1000 }})
MongoDB Enterprise m103-repl:PRIMARY> use testDatabase
switched to db testDatabase
MongoDB Enterprise m103-repl:PRIMARY> db.new_data.insert({"m103": "very fun"}, { writeConcern: { w: 3, wtimeout: 1000 }})
WriteResult({
  "nInserted" : 1,
  "writeConcernError" : {
    "code" : 64,
    "codeName" : "WriteConcernFailed",
    "errInfo" : {
      "wtimeout" : true
    },
    "errmsg" : "waiting for replication timed out"
  }
})
Chapter 2: Replication
Lab - Writes with Failovers
----------------------------
Correct:
1)When a writeConcernError occurs, the document is still written to the healthy nodes.
2)The unhealthy node will have the inserted document when it is brought back online.
Incorrect:
1)w: "majority" would also cause this write operation to return with an error.
2)The write operation will always return with an error, even if wtimeout is not specified.
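For contrast, a sketch of the same insert with majority write concern; with only one of the
three members down, a majority (two nodes) is still available, so no writeConcernError is expected:
db.new_data.insert({"m103": "very fun"}, { writeConcern: { w: "majority", wtimeout: 1000 }})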
---------------------------------------------------------------------------------------------------------------------------
mongod -f mongod-repl-1.conf
mongod -f mongod-repl-2.conf
mongod -f mongod-repl-3.conf
mongoimport --drop \
--host m103-repl/192.168.103.100:27002,192.168.103.100:27001,192.168.103.100:27003 \
-u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" \
--db applicationData --collection products /dataset/products.json
vagrant@m103:~$ mongoimport --drop \
> --host m103-repl/192.168.103.100:27002,192.168.103.100:27001,192.168.103.100:27003 \
> -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" \
> --db applicationData --collection products /dataset/products.json
2019-01-22T06:25:25.459+0000 connected to: m103-repl/192.168.103.100:27002,192.168.103.100:27001,192.168.103.100:27003
2019-01-22T06:25:25.471+0000 dropping: applicationData.products
2019-01-22T06:25:28.440+0000 [#.......................] applicationData.products 4.38MB/87.9MB (5.0%)
2019-01-22T06:25:31.440+0000 [##......................] applicationData.products 8.67MB/87.9MB (9.9%)
2019-01-22T06:25:34.439+0000 [###.....................] applicationData.products 13.0MB/87.9MB (14.8%)
2019-01-22T06:25:37.439+0000 [####....................] applicationData.products 17.2MB/87.9MB (19.5%)
2019-01-22T06:25:40.439+0000 [#####...................] applicationData.products 21.4MB/87.9MB (24.3%)
2019-01-22T06:25:43.439+0000 [######..................] applicationData.products 25.6MB/87.9MB (29.1%)
2019-01-22T06:25:46.439+0000 [########................] applicationData.products 29.6MB/87.9MB (33.7%)
2019-01-22T06:25:49.440+0000 [#########...............] applicationData.products 33.7MB/87.9MB (38.4%)
2019-01-22T06:25:52.439+0000 [##########..............] applicationData.products 38.0MB/87.9MB (43.2%)
2019-01-22T06:25:55.439+0000 [###########.............] applicationData.products 42.0MB/87.9MB (47.7%)
2019-01-22T06:25:58.439+0000 [############............] applicationData.products 46.3MB/87.9MB (52.6%)
2019-01-22T06:26:01.439+0000 [#############...........] applicationData.products 50.1MB/87.9MB (56.9%)
2019-01-22T06:26:04.440+0000 [##############..........] applicationData.products 54.0MB/87.9MB (61.4%)
2019-01-22T06:26:07.440+0000 [###############.........] applicationData.products 58.1MB/87.9MB (66.0%)
2019-01-22T06:26:10.439+0000 [################........] applicationData.products 62.2MB/87.9MB (70.8%)
2019-01-22T06:26:13.441+0000 [##################......] applicationData.products 66.6MB/87.9MB (75.7%)
2019-01-22T06:26:16.439+0000 [###################.....] applicationData.products 70.8MB/87.9MB (80.6%)
2019-01-22T06:26:19.439+0000 [####################....] applicationData.products 74.9MB/87.9MB (85.2%)
2019-01-22T06:26:22.439+0000 [#####################...] applicationData.products 79.0MB/87.9MB (89.8%)
2019-01-22T06:26:25.439+0000 [######################..] applicationData.products 83.3MB/87.9MB (94.7%)
2019-01-22T06:26:28.230+0000 [########################] applicationData.products 87.9MB/87.9MB (100.0%)
2019-01-22T06:26:28.230+0000 imported 516784 documents
vagrant@m103:~$
use applicationData
db.products.count()
MongoDB Enterprise m103-repl:PRIMARY> use applicationData
switched to db applicationData
MongoDB Enterprise m103-repl:PRIMARY> db.products.count()
516784
Chapter 2: Replication
Lab - Read Concern and Read Preferences
-----------------------------------------
Correct:
1)secondaryPreferred
2)nearest
3)secondary
4)primaryPreferred
Incorrect:
primary
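A sketch of steering a read to a non-primary member from the mongo shell, which is what the
four correct modes above allow:
db.products.find().limit(1).readPref("secondaryPreferred")
// secondaryPreferred falls back to the primary only if no secondary is available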
=============================================================================================================================================
Chapter 3: Sharding
====================
1. Bring up the config server replica set (CSRS)
csrs_1.conf
vi csrs_1.conf
sharding:
  clusterRole: configsvr
replication:
  replSetName: m103-csrs
security:
  keyFile: /var/mongodb/pki/m103-keyfile
net:
  bindIp: localhost,192.168.103.100
  port: 26001
systemLog:
  destination: file
  path: /var/mongodb/db/csrs1/mongod.log
  logAppend: true
processManagement:
  fork: true
storage:
  dbPath: /var/mongodb/db/csrs1
cp csrs_1.conf csrs_2.conf
cp csrs_1.conf csrs_3.conf
vi csrs_2.conf
vi csrs_3.conf
mkdir /var/mongodb/db/{csrs1,csrs2,csrs3}
mongod -f csrs_1.conf
mongod -f csrs_2.conf
mongod -f csrs_3.conf
mongo --port 26001
rs.initiate()
use admin
db.createUser({
  user: "m103-admin",
  pwd: "m103-pass",
  roles: [
    {role: "root", db: "admin"}
  ]
})
db.auth("m103-admin","m103-pass")
exit
mongo --host "m103-csrs/192.168.103.100:26001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
rs.status()
rs.add("m103:26002")
rs.add("m103:26003")
2. Bring up the mongos
mongos.conf
vi mongos.conf
sharding:
  configDB: m103-csrs/192.168.103.100:26001,192.168.103.100:26002,192.168.103.100:26003
security:
  keyFile: /var/mongodb/pki/m103-keyfile
net:
  bindIp: localhost,192.168.103.100
  port: 26000
systemLog:
  destination: file
  path: /var/mongodb/db/mongos.log
  logAppend: true
processManagement:
  fork: true
mongos -f mongos.conf
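An optional check that the mongos is up and talking to the CSRS (no shards are added yet,
so the shards section should be empty):
mongo --port 26000 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval 'sh.status()'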
3. Reconfigure m103-repl
Add the following to each node's config file (the small WiredTiger cache keeps several
mongod processes from exhausting the VM's memory):
sharding:
  clusterRole: shardsvr
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: .1
The full mongod-repl-1.conf then looks like this (note there is a single storage block,
with wiredTiger merged under the existing dbPath):
sharding:
  clusterRole: shardsvr
storage:
  dbPath: /var/mongodb/db/1
  wiredTiger:
    engineConfig:
      cacheSizeGB: .1
net:
  bindIp: 192.168.103.100,localhost
  port: 27001
security:
  authorization: enabled
  keyFile: /var/mongodb/pki/m103-keyfile
systemLog:
  destination: file
  path: /var/mongodb/db/mongod1.log
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-repl
vi mongod-repl-1.conf
vi mongod-repl-2.conf
vi mongod-repl-3.conf
mongod -f mongod-repl-1.conf
mongod -f mongod-repl-2.conf
mongod -f mongod-repl-3.conf
4. Add m103-repl as the first shard
mongo --port 26000 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
sh.addShard("m103-repl/m103:27001")
-------------------------------------
Chapter 3: Sharding
Lab - Configure a Sharded Cluster
validate_lab_first_sharded_cluster
5a57de1cb1575291ce6e560a
---------------------------------------
mongod -f csrs_1.conf
mongod -f csrs_2.conf
mongod -f csrs_3.conf
mongod -f mongod-repl-1.conf
mongod -f mongod-repl-2.conf
mongod -f mongod-repl-3.conf
mongos -f mongos.conf
mongo admin --port 26000 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval 'db.shutdownServer()'
mongo admin --port 26001 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval 'db.shutdownServer()'
mongo admin --port 26002 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval 'db.shutdownServer()'
mongo admin --port 26003 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval 'db.shutdownServer()'
mongo admin --port 27001 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval 'db.shutdownServer()'
mongo admin --port 27002 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval 'db.shutdownServer()'
mongo admin --port 27003 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval 'db.shutdownServer()'
(dbPath directories used above: /var/mongodb/db/1 for the shard nodes, /var/mongodb/db/csrs1 for the config servers)
--------------------------------------
--------------------------------------------------------------------------------------------------------------------------------
mkdir /var/mongodb/db/{4,5,6}
vi mongod-repl-4.conf
storage:
  dbPath: /var/mongodb/db/4
  wiredTiger:
    engineConfig:
      cacheSizeGB: .1
net:
  bindIp: 192.168.103.100,localhost
  port: 27004
security:
  keyFile: /var/mongodb/pki/m103-keyfile
systemLog:
  destination: file
  path: /var/mongodb/db/4/mongod.log
  logAppend: true
processManagement:
  fork: true
operationProfiling:
  slowOpThresholdMs: 50
replication:
  replSetName: m103-repl-2
sharding:
  clusterRole: shardsvr
cp mongod-repl-4.conf mongod-repl-5.conf
cp mongod-repl-4.conf mongod-repl-6.conf
vi mongod-repl-5.conf
vi mongod-repl-6.conf
mongod -f mongod-repl-4.conf
mongod -f mongod-repl-5.conf
mongod -f mongod-repl-6.conf
mongo --port 27004
rs.initiate()
use admin
db.createUser({
  user: "m103-admin",
  pwd: "m103-pass",
  roles: [
    {role: "root", db: "admin"}
  ]
})
db.auth("m103-admin","m103-pass")
exit
mongo --host "m103-repl/192.168.103.100:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
rs.status()
rs.add("m103:27005")
rs.add("m103:27006")
sh.addShard("m103-repl-2/192.168.103.100:27004")
mongoimport --drop /dataset/products.json --port 26000 -u "m103-admin" \
-p "m103-pass" --authenticationDatabase "admin" \
--db m103 --collection products
sh.enableSharding("m103")
use m103
db.products.createIndex({"sku": 1})
sh.shardCollection("m103.products", {"sku" : 1 } ) or
db.adminCommand( { shardCollection: "m103.products", key: { sku: 1 } } )
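Once the collection is sharded, an optional sketch for watching how its documents spread:
db.products.getShardDistribution()   // prints per-shard document and chunk counts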
------------------------------------
Chapter 3: Sharding
Lab - Shard a Collection
Choosing the Correct Shard Key
validate_lab_shard_collection
5a621149d083824c6d889865
------------------------------------
-----------------------------------------------------------------------------------------------------------------------------------
mongod -f csrs_1.conf
mongod -f csrs_2.conf
mongod -f csrs_3.conf
mongod -f mongod-repl-1.conf
mongod -f mongod-repl-2.conf
mongod -f mongod-repl-3.conf
mongod -f mongod-repl-4.conf
mongod -f mongod-repl-5.conf
mongod -f mongod-repl-6.conf
mongos -f mongos.conf
mongo admin --port 26000 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
use config
db.chunks.find().pretty()
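To summarize chunk ownership instead of reading each chunk document, a small aggregation sketch
(run against the same config database):
db.chunks.aggregate([{ $group: { _id: "$shard", nChunks: { $sum: 1 } } }])
// returns one document per shard with its chunk count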
----------------------------------------------------------
Chapter 3: Sharding
Lab - Documents in Chunks
validate_lab_document_chunks m103.products-sku_MinKey
5ac28a604c7baf1f5c25d51b
-----------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------
Chapter 3: Sharding
Lab - Detect Scatter Gather Queries
Correct Answers:
1)Query 1 performs a document fetch.
2)Query 1 performs an index scan before the sharding filter.
3)Query 2 performs a collection scan.
Incorrect Answers:
1)Query 2 uses the shard key.
2)Both queries perform a sharding filter before the document fetch.
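A sketch of how explain() on the mongos exposes the difference; the filter values here are
made up for illustration (sku is the shard key from the earlier lab):
db.products.find({ "sku": 1000000000 }).explain()   // shard key in the predicate: SINGLE_SHARD
db.products.find({ "name": "MongoDB" }).explain()   // no shard key: SHARD_MERGE across all shards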
======================================================================================================================================================
Final Exam
===========
Final: Question 1
------------------
Problem: Which of the following are valid command line instructions to start a mongod? You may assume that all specified files already exist.
Correct:
1)mongod --logpath /var/log/mongo/mongod.log --dbpath /data/db --fork
2)mongod -f /etc/mongod.conf
Incorrect:
1)mongod --dbpath /data/db --fork
2)mongod --log /var/log/mongo/mongod.log --authentication
----------------------------------------------------------
Final: Question 2
------------------
Problem: Given the following config file, how many directories must MongoDB have access to? Disregard the path to the configuration file itself.
storage:
  dbPath: /data/db
systemLog:
  destination: file
  path: /var/log/mongod.log
net:
  bindIp: localhost,192.168.0.100
security:
  keyFile: /var/pki/keyfile
processManagement:
  fork: true
Correct:
1)3 (the directories /data/db, /var/log, and /var/pki)
Incorrect:
1)1
2)2
3)4
------------------------------------------------------------
Final: Question 3
------------------
Problem: Given the following output from rs.status().members:
[
  {
    "_id": 0,
    "name": "localhost:27017",
    "health": 1,
    "state": 1,
    "stateStr": "PRIMARY",
    "uptime": 548,
    "optime": {
      "ts": Timestamp(1521038871, 1),
      "t": NumberLong("1")
    },
    "optimeDate": ISODate("2018-03-14T14:47:51Z"),
    "electionTime": Timestamp(1521038358, 2),
    "electionDate": ISODate("2018-03-14T14:39:18Z"),
    "configVersion": 2,
    "self": true
  },
  {
    "_id": 1,
    "name": "localhost:27018",
    "health": 1,
    "state": 2,
    "stateStr": "SECONDARY",
    "uptime": 289,
    "optime": {
      "ts": Timestamp(1521038871, 1),
      "t": NumberLong("1")
    },
    "optimeDurable": {
      "ts": Timestamp(1521038871, 1),
      "t": NumberLong("1")
    },
    "optimeDate": ISODate("2018-03-14T14:47:51Z"),
    "optimeDurableDate": ISODate("2018-03-14T14:47:51Z"),
    "lastHeartbeat": ISODate("2018-03-14T14:47:56.558Z"),
    "lastHeartbeatRecv": ISODate("2018-03-14T14:47:56.517Z"),
    "pingMs": NumberLong("0"),
    "syncingTo": "localhost:27022",
    "configVersion": 2
  },
  {
    "_id": 2,
    "name": "localhost:27019",
    "health": 1,
    "state": 2,
    "stateStr": "SECONDARY",
    "uptime": 289,
    "optime": {
      "ts": Timestamp(1521038871, 1),
      "t": NumberLong("1")
    },
    "optimeDurable": {
      "ts": Timestamp(1521038871, 1),
      "t": NumberLong("1")
    },
    "optimeDate": ISODate("2018-03-14T14:47:51Z"),
    "optimeDurableDate": ISODate("2018-03-14T14:47:51Z"),
    "lastHeartbeat": ISODate("2018-03-14T14:47:56.558Z"),
    "lastHeartbeatRecv": ISODate("2018-03-14T14:47:56.654Z"),
    "pingMs": NumberLong("0"),
    "syncingTo": "localhost:27022",
    "configVersion": 2
  },
  {
    "_id": 3,
    "name": "localhost:27020",
    "health": 1,
    "state": 2,
    "stateStr": "SECONDARY",
    "uptime": 289,
    "optime": {
      "ts": Timestamp(1521038871, 1),
      "t": NumberLong("1")
    },
    "optimeDurable": {
      "ts": Timestamp(1521038871, 1),
      "t": NumberLong("1")
    },
    "optimeDate": ISODate("2018-03-14T14:47:51Z"),
    "optimeDurableDate": ISODate("2018-03-14T14:47:51Z"),
    "lastHeartbeat": ISODate("2018-03-14T14:47:56.558Z"),
    "lastHeartbeatRecv": ISODate("2018-03-14T14:47:56.726Z"),
    "pingMs": NumberLong("0"),
    "syncingTo": "localhost:27022",
    "configVersion": 2
  },
  {
    "_id": 4,
    "name": "localhost:27021",
    "health": 0,
    "state": 8,
    "stateStr": "(not reachable/healthy)",
    "uptime": 0,
    "optime": {
      "ts": Timestamp(0, 0),
      "t": NumberLong("-1")
    },
    "optimeDurable": {
      "ts": Timestamp(0, 0),
      "t": NumberLong("-1")
    },
    "optimeDate": ISODate("1970-01-01T00:00:00Z"),
    "optimeDurableDate": ISODate("1970-01-01T00:00:00Z"),
    "lastHeartbeat": ISODate("2018-03-14T14:47:56.656Z"),
    "lastHeartbeatRecv": ISODate("2018-03-14T14:47:12.668Z"),
    "pingMs": NumberLong("0"),
    "lastHeartbeatMessage": "Connection refused",
    "configVersion": -1
  },
  {
    "_id": 5,
    "name": "localhost:27022",
    "health": 1,
    "state": 2,
    "stateStr": "SECONDARY",
    "uptime": 289,
    "optime": {
      "ts": Timestamp(1521038871, 1),
      "t": NumberLong("1")
    },
    "optimeDurable": {
      "ts": Timestamp(1521038871, 1),
      "t": NumberLong("1")
    },
    "optimeDate": ISODate("2018-03-14T14:47:51Z"),
    "optimeDurableDate": ISODate("2018-03-14T14:47:51Z"),
    "lastHeartbeat": ISODate("2018-03-14T14:47:56.558Z"),
    "lastHeartbeatRecv": ISODate("2018-03-14T14:47:55.974Z"),
    "pingMs": NumberLong("0"),
    "syncingTo": "localhost:27017",
    "configVersion": 2
  },
  {
    "_id": 6,
    "name": "localhost:27023",
    "health": 1,
    "state": 2,
    "stateStr": "SECONDARY",
    "uptime": 289,
    "optime": {
      "ts": Timestamp(1521038871, 1),
      "t": NumberLong("1")
    },
    "optimeDurable": {
      "ts": Timestamp(1521038871, 1),
      "t": NumberLong("1")
    },
    "optimeDate": ISODate("2018-03-14T14:47:51Z"),
    "optimeDurableDate": ISODate("2018-03-14T14:47:51Z"),
    "lastHeartbeat": ISODate("2018-03-14T14:47:56.558Z"),
    "lastHeartbeatRecv": ISODate("2018-03-14T14:47:56.801Z"),
    "pingMs": NumberLong("0"),
    "syncingTo": "localhost:27022",
    "configVersion": 2
  }
]
At this moment, how many replica set members are eligible to become primary in the event of the current Primary crashing or stepping down?
Correct:
1)5 (of the six secondaries, all but the unreachable node at localhost:27021 are eligible)
Incorrect:
1)4
2)6
3)7
------------------------------------------------------------
Final: Question 4
------------------
Problem: Given the following replica set configuration:
conf = {
  "_id": "replset",
  "version": 1,
  "protocolVersion": 1,
  "members": [
    {
      "_id": 0,
      "host": "192.168.103.100:27017",
      "priority": 2,
      "votes": 1
    },
    {
      "_id": 0,
      "host": "192.168.103.100:27018",
      "priority": 1,
      "votes": 1
    },
    {
      "_id": 2,
      "host": "192.168.103.100:27018",
      "priority": 1,
      "votes": 1
    }
  ]
}
What errors are present in the above replica set configuration?
Correct:
1)You cannot specify the same host information among multiple members.
2)You cannot specify two members with the same _id.
Incorrect:
1)You can only specify a priority of 0 or 1; member "_id": 0 is incorrectly configured.
2)You cannot have three members in a replica set.
-------------------------------------------------------------------------------------
Final: Question 5
------------------
Problem: Given the following replica set configuration:
conf = {
  "_id": "replset",
  "version": 1,
  "protocolVersion": 1,
  "members": [
    {
      "_id": 0,
      "host": "localhost:27017",
      "priority": 1,
      "votes": 1
    },
    {
      "_id": 1,
      "host": "localhost:27018",
      "priority": 1,
      "votes": 1
    },
    {
      "_id": 2,
      "host": "localhost:27019",
      "priority": 1,
      "votes": 1
    },
    {
      "_id": 3,
      "host": "localhost:27020",
      "priority": 0,
      "votes": 0,
      "slaveDelay": 3600
    }
  ]
}
What is the most likely role served by the node with "_id": 3?
Correct:
1)It serves as a "hot" backup of data in case of accidental data loss on the other members, like a DBA accidentally dropping the database.
Incorrect:
1)It serves reads and writes for people in the same geographic region as the host machine.
2)It serves as a reference to perform analytics on how data is changing over time.
3)It serves as a hidden secondary available to use for non-critical analysis operations.
-------------------------------------------------------------------------------------------------------------
Final: Question 6
------------------
Problem: Given the following shard key: { "country": 1, "_id": 1 }
Which of the following queries will be routed (targeted)? Remember that queries may be routed to more than one shard.
Correct:
1)db.customers.find({"country": "Norway", "_id": 54})
2)db.customers.find({"_id": 914, "country": "Sweden"})
3)db.customers.find({"country": { $gte: "Portugal", $lte: "Spain" }})
Incorrect:
1)db.customers.find({"_id": 455})
(A query is targeted when its predicate includes a prefix of the shard key; "country" alone or "country" plus "_id" qualifies, but "_id" alone does not.)
==============================================================================================================================