
Creating User Models in MongoDB



MongoDB has a flexible schema: documents in the same collection do not have to have the same set of fields or structure, and common fields in a collection's documents may hold different types of data.
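For example, two documents in the same collection can look like this (a hypothetical users collection; the field names are purely illustrative):

{ _id: 1, name: "Anurag", age: 24 }

{ _id: 2, name: "Hal", email: "hal@example.com", address: { city: "Mumbai" } }

Both documents live in the same collection, yet only some of their fields overlap.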


The following is an overview of what we will cover today.


  • Making A Model

  • Making CRUD routes for that model

  • Test your code with an API tester such as Postman or Insomnia.


So, let us quickly have a look at them all.


Design of a Data Model


MongoDB supports two data models: embedded data models and normalized data models. Depending on your requirements, you can design your documents using either model.


1. The Embedded Data Model


This paradigm, also known as the de-normalized data model, allows you to have (embed) all of the related data in a single document.


For example, if we acquire employee information in three documents, Personal_details, Contact, and Address, we can embed all three in a single one, as shown below.

{
    _id: <ObjectId>,
    Emp_ID: "10025AE336",
    Personal_details: {
        First_Name: "Anurag",
        Last_Name: "Sharma",
        Date_Of_Birth: "1999-07-04"
    },
    Contact: {
        "e-mail": "anurag_sharma.123@outlook.com",
        phone: "7896534268"
    },
    Address: {
        city: "Mumbai",
        Area: "Andheri",
        State: "Maharashtra"
    }
}

2. The Normalized Data Model


In this model, the original document holds references to related sub-documents stored separately. For example, the embedded document above could be rewritten in the normalized model as:

Employee

{
    _id: <ObjectId101>,
    Emp_ID: "CIPH0101"
}

Personal_details

{
    _id: <ObjectId102>,
    empDocID: "ObjectId101",
    First_Name: "Anurag",
    Last_Name: "Sharma",
    Date_Of_Birth: "1995-07-04"
}

Contact

{
    _id: <ObjectId103>,
    empDocID: "ObjectId101",
    "e-mail": "anurag_sharma.123@outlook.com",
    phone: "7896534268"
}

Address

{
    _id: <ObjectId104>,
    empDocID: "ObjectId101",
    city: "Mumbai",
    Area: "Andheri",
    State: "Maharashtra"
}
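To read related data in this normalized layout, the application performs the join itself. Here is a rough sketch in the mongo shell, assuming empDocID stores the _id of the matching Employee document (the collection names follow the labels above):

// Fetch the employee first, then look up the documents that reference it
var emp = db.Employee.findOne({ Emp_ID: "CIPH0101" })
var personal = db.Personal_details.findOne({ empDocID: emp._id })
var contact = db.Contact.findOne({ empDocID: emp._id })
var address = db.Address.findOne({ empDocID: emp._id })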


Considerations for MongoDB Schema Design


  • Create your schema according to the needs of the users.

  • If you intend to use objects together, combine them into a single document; otherwise, keep them separate (but make sure joins will not be required).

  • Duplicate data where it helps (but only to a limited extent), because disk space is cheaper than compute time.

  • Do joins while writing, not while reading.

  • Optimize your schema for the most common use cases.

  • Do complex aggregation in the schema.


Example


Assume a client needs a database design for his blog/website and wants to understand the differences between RDBMS and MongoDB schema design. The following requirements apply to the website.


  • Each post has its title, description, and URL.

  • Tags can be added to any post.

  • Every post includes the publisher's name and the total number of likes.

  • Every post includes user comments, each with the commenter's name, message, date-time, and likes.

  • There might be zero or more comments on each post.



The preceding requirements will require at least three tables in an RDBMS schema.




In a MongoDB schema, by contrast, the design will have a single post collection with the structure shown below.

{
    _id: POST_ID,
    title: TITLE_OF_POST,
    description: POST_DESCRIPTION,
    by: POST_BY,
    url: URL_OF_POST,
    tags: [TAGA, TAGB, TAGC],
    likes: TOTAL_LIKES,
    comments: [
        {
            user: 'COMMENT_AS',
            message: TEXT,
            dateCreated: DATE_TIME,
            like: LIKES
        },
        {
            user: 'COMMENT_AS',
            message: TEXT,
            dateCreated: DATE_TIME,
            like: LIKES
        }
    ]
}

Making A Model


How you organize your files within this program is entirely up to you. Because this type of backend is less opinionated than Ruby on Rails, you have a little more leeway with the structure.


To keep the project's files organized, we recommend keeping related items in distinct directories, so we'll start by creating a new folder called models and nesting a file called user.model.js inside it.


This is where we will specify the data model we aim to map to our MongoDB database. Because we're using the Mongoose library to communicate between our Express app and our database, the first thing to do inside this file is to require Mongoose and load the Schema class from the library.


const mongoose = require('mongoose')

const Schema = mongoose.Schema


By establishing a new instance of the Schema class, we can start writing the format for our User model.

const userSchema = new Schema()


The schema's first input is an object containing the attributes for your data model. Our user model will have three attributes: a username, an email address, and an optional age. Each attribute is described using a key: value pair, with the key representing the attribute's name and the value representing its type. The Mongoose documentation lists all of the schema types that are currently available; we will only need the String and Number types.

const userSchema = new Schema({
    username: String,
    email: String,
    age: Number
})

This may suffice, but we intended the age to be optional. The good news is that it already is: by default, every attribute is optional (required defaults to false). Rather than making the age optional, then, we need to make the username and email mandatory.

const userSchema = new Schema({
    username: { type: String, required: true },
    email: { type: String, required: true },
    age: Number
})

We can expand the value of any attribute into a whole object that contains further information about the attribute. Let's pretend you can only use our app if you're 18 or older.

const userSchema = new Schema({
    username: { type: String, required: true },
    email: { type: String, required: true },
    age: { type: Number, min: 18 }
})

Validations are as simple as that. When you create a new user model instance, it will go through this schema to ensure that your input attributes fit the criteria.


The final step is to use Mongoose to compile this schema into a data model and then export it from this file for use in other parts of our project.


const User = mongoose.model('User', userSchema)

module.exports = User
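Before moving on, here's a quick way to see the validation in action. This is a minimal sketch that assumes the User model above; validateSync is a standard Mongoose document method, and the sample values are made up:

// A throwaway document that violates the age rule
const tooYoung = new User({ username: 'Hal', email: 'hal@example.com', age: 15 })

// validateSync() runs the schema validators without touching the database.
// It returns a ValidationError here because age is below the minimum of 18.
const error = tooYoung.validateSync()
console.log(error.errors.age.message)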

Now we can create some CRUD routes for the User.


Developing CRUD routes


Following the same organizational preference, we'll now create a folder named controllers containing the file user.controller.js. This will be the central hub for all User model-related actions, controlling what happens when we create, read, update, or delete a model instance.


Two things must be imported into this file. We'll need the User model because we will use it regularly, and we'll need Express's Router to define some routes.

const User = require('../models/user.model')
const router = require('express').Router()

Look at the router import: notice how every time you need the router, you call a function that produces a completely new router instance within this file. So, if we had more than one model, the code in its controller file would look the same, but each router object would be completely separate. We'll adjust this instance of the router to make it specific to our User model, then export it for the rest of our Express project.


With that in mind, let's create the boilerplate for the routes we'll require and export it.

router.route('/new').post()
router.route('/').get()
router.route('/delete/:id').delete()
router.route('/update/:id').put()
module.exports = router

We may now begin customizing those routes. Remember that you must first pass the path to the router's route method and then specify which HTTP method will be applied to that path. In the first scenario, the method accepts one argument: a function to run when this path is reached. We'll use anonymous functions to keep the logic of each route contained within a single router.route call.


router.route('/new').post((req, res)=>{


})


The function is given a couple of arguments by default: the request (req) information and the response (res) information. The request is made by your front end (here, via Postman/Insomnia), and the response is sent by your back end. Each parameter comes with some built-in functionality that we will use here. You should recognize the structure if you've ever made a POST request.


Your front end will submit a request to your back end with a body attribute containing the information to be posted to the database. The body should hold something like: username: "Hal", email: "Anurag@cipherschools.com", age: 24. We will create a new instance of our user model using that information.

router.route('/new').post((req, res)=>{
    const newUser = new User(req.body)
})

This line will leverage our Mongoose model to generate something our MongoDB database should accept. The next step is to connect to the database and attempt to save the new instance.

router.route('/new').post((req, res)=>{
    const newUser = new User(req.body)
    newUser.save()
        .then(user => res.json(user))
})

If the database entry is successful, MongoDB will return the newly created User. There is one significant distinction between this user instance and the one we generated named newUser: the User returned from MongoDB will have an ID, which we will need to utilize in the future to perform all other types of operations on this User instance.


Once we have this confirmed user instance, we use res.json(user) to finish the cycle by sending the saved user back as JSON in our response.


Our code should function. However, it could be a lot better. We did not plan for the possibility that the database would reject our new User for various reasons. So, before we continue, let's add some error handling:

router.route('/new').post((req, res) => {
    const newUser = new User(req.body)

    newUser.save()
        .then(user => res.json(user))
        .catch(err => res.status(400).json("Error! " + err))
})

Now that we've written it down, there's one more step before we can put it to the test. As of now, the Express app that we built inside server.js is unaware of the model or controller files we've created. So we must return to the server and tell it about our new code.

// inside server.js
const userRoutes = require('./controllers/user.controller')
app.use('/users', userRoutes)

By telling the app to use the user controller at '/users', we nest every route created in that controller beneath the '/users' resource. Our front end should therefore send a POST request to 'http://localhost:5000/users/new' to create a new user.
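For context, a stripped-down server.js might look something like the sketch below. The connection string, JSON middleware, and port are assumptions based on a typical setup; adjust them to match your own project:

// server.js: minimal sketch of the Express app the controller plugs into
const express = require('express')
const mongoose = require('mongoose')

const app = express()
app.use(express.json()) // parse JSON request bodies so req.body is populated

// The connection string is a placeholder; use your own local or Atlas URI
mongoose.connect('mongodb://localhost:27017/usersdb')

const userRoutes = require('./controllers/user.controller')
app.use('/users', userRoutes)

app.listen(5000, () => console.log('Server running on port 5000'))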


And now we can put it to the test!


Test your code with an API tester such as Postman or Insomnia.


We've used both of these applications and like them both; in any case, this is not an endorsement. They both get the job done.


We'll use Insomnia here because the name is catchy. Once you've opened your tester, start a new request, specify that it is a POST request, paste http://localhost:5000/users/new into the URL field, and choose JSON for the body type.


Then you can add some raw JSON to the body, which should correspond to what we anticipate seeing in the body that your front end sends. So, again: username: "Hal", email: "Anurag@CipherSchools.com", age: 24. Then submit the request! If everything is in order, you will receive the output.
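To make it concrete, the request body and a typical response look roughly like this. The _id value is generated by MongoDB, so yours will differ, and the __v version field is added automatically by Mongoose:

// Request body (raw JSON)
{
    "username": "Hal",
    "email": "Anurag@CipherSchools.com",
    "age": 24
}

// Example response
{
    "_id": "64a1f2c3d4e5f6a7b8c9d0e1",
    "username": "Hal",
    "email": "Anurag@CipherSchools.com",
    "age": 24,
    "__v": 0
}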


We got our ID! Huge success.


We now have a functional MEN backend after completing this route. Of course, we still need to finish filling out the other CRUD routes, but the most difficult part of ensuring that Express, MongoDB, and Mongoose can communicate well is done. Another good time for a refreshing glass of water.


Because the rest of the routes are variants on the first one, we'll group them all together so we can look at them as a whole:

router.route('/').get((req, res) => {
    // using .find() without a parameter will match all user instances
    User.find()
        .then(allUsers => res.json(allUsers))
        .catch(err => res.status(400).json('Error! ' + err))
})
router.route('/delete/:id').delete((req, res) => {
    User.deleteOne({ _id: req.params.id })
        .then(success => res.json('Success! User deleted.'))
        .catch(err => res.status(400).json('Error! ' + err))
})
router.route('/update/:id').put((req, res) => {
    User.findByIdAndUpdate(req.params.id, req.body)
        .then(user => res.json('Success! User updated.'))
        .catch(err => res.status(400).json('Error! ' + err))
})

Take note of the following:


  • You can get the ID from the request URL via req.params (here, req.params.id).

  • The update route, as written, requires the frontend request to contain values for every field other than the ID; this is the most straightforward way of updating our database (see the sketch after this list for a variation).

  • You have complete control over the response given back to the front end. All you'd have to do to mask server problems for security reasons is adjust what your catch sends back.
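If you would rather get the updated document back and have the schema validators run during the update, Mongoose's findByIdAndUpdate accepts an options object. Here is a possible variation of the update route above (the options shown are standard Mongoose options):

router.route('/update/:id').put((req, res) => {
    // new: true returns the document as it looks after the update;
    // runValidators: true re-runs the schema validations on the updated fields
    User.findByIdAndUpdate(req.params.id, req.body, { new: true, runValidators: true })
        .then(user => res.json(user))
        .catch(err => res.status(400).json('Error! ' + err))
})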


Conclusion


Finally, building user models in MongoDB offers a versatile and scalable solution for managing user data in modern applications. The document-based data model of MongoDB enables developers to define and organize user data following their application requirements.


User data may be efficiently fetched, updated, and queried by exploiting MongoDB's powerful query language and indexing features.


One of the primary benefits of adopting MongoDB for user modeling is its ability to deal with dynamic and developing user schemas. In contrast to typical relational databases, MongoDB supports flexible schema designs, allowing for adding or altering user attributes without the need for costly schema migrations.


This adaptability is especially useful when user data is likely to change often, such as when providing new features or adapting user preferences.


Furthermore, because of its horizontal scalability and distributed architecture, MongoDB is well-suited to handle large-scale user datasets and heavy traffic loads. Distributing user data over several nodes and replica sets ensures high availability, fault tolerance, and increased performance. This scalability is critical for apps that expect significant user growth or must manage multiple user interactions simultaneously.


Furthermore, MongoDB has strong security measures to protect user data. It supports role-based access control (RBAC), which allows administrators to establish granular access levels and permissions for distinct user roles. Encryption preserves the confidentiality and integrity of user data at rest and in transit, reducing the risk of unauthorized access or data breaches.


Overall, developing user models in MongoDB enables developers to create modern, scalable, and secure apps that manage user data effectively. Developers can focus on creating a flawless user experience while protecting the integrity and confidentiality of user information by employing MongoDB's flexible schema design, scalability, and comprehensive security capabilities.


FAQs


How does React work?


A: According to Hostinger, React is a JavaScript library used to create user interfaces. It manages UI updates efficiently by re-rendering only the sections of the DOM that have changed, using a virtual DOM (Document Object Model). React employs a component-based architecture, with each component responsible for generating a specific portion of the user interface. The components can be reused and combined to form more complex user interfaces.


Explain MongoDB replication.


A: According to the MongoDB manual, replication is the process of synchronizing data across multiple MongoDB servers to provide redundancy, availability, and scalability. One server serves as the primary server in a replicated MongoDB deployment, while the others function as secondary servers. All write actions are received by the primary server, which then propagates updates to the other servers. If the primary server fails, one of the secondary servers becomes the new primary. Backup and disaster recovery, load balancing, and enhancing read performance can all be accomplished using replication.
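As a rough illustration, a three-member replica set is created by starting each mongod with the same replica set name and then initiating the set from the shell. The hostnames and set name below are placeholders:

// Run once, connected to the member that should become the primary
rs.initiate({
    _id: "rs0",
    members: [
        { _id: 0, host: "mongodb0.example.net:27017" },
        { _id: 1, host: "mongodb1.example.net:27017" },
        { _id: 2, host: "mongodb2.example.net:27017" }
    ]
})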


Stay tuned to CipherSchools for more interesting tech articles.


