Chapter 2: ManageEngine's software development framework

In this chapter, we will give you an overview of our software development framework from ideation to release, including the roles and responsibilities.

Note: The practices mentioned here may vary across teams. Software development is a constantly evolving process and often changes depending upon current requirements and resource availability. Choose an approach that works best for your team and do not be afraid to make changes as you go.

Roles and responsibilities

Product management

  • Ideate new features/products based on market requirements.
  • Interact with customers to understand usability of features.
  • Create use cases based on customer input.
  • Implement each use case of a feature with the help of user interface (UI) and user experience (UX) teams.

Legal

  • Verify that the third-party products (if any) we require are legal to bundle with our products.
  • Get the necessary verification if third-party libraries are going to be used.

User interface development: Design

  • Work with the product management team to come up with the UI and UX design prototype.
  • Provide sample designs to product management team to select the best UI for the feature.
  • Deliver the finalized UI and UX design prototype to the software development (SD) team.
  • Conduct UI and UX review after implementation by the module team.

User interface development: HTML

  • Convert wireframe images/designs into HTML files.
  • Ensure the accuracy of converted HTML files with wireframe images/designs.
  • Send the converted HTML files to the module team.
  • Conduct UI and UX review after implementation by the module team.

Module

  • Work on server/client design based on input from the product management (PM) team.
  • Conduct security reviews.
  • Carry out first phase of validation testing.
  • Resolve issues reported by the quality assurance (QA) team.

Software development: Server-side

  • Prepare the data model based on the requirements from the PM team.
  • Integrate server-side third-party libraries for feature development, if required.

Software development: Client-side

  • Complete all client-side workflows using client libraries (JavaScript).
  • Get server-side REST APIs from the server-side coders and HTML files from the UI development team to complete the client-side workflows.
  • Integrate client-side third-party libraries if required for a feature.
  • Perform validation based on the manual test cases.
  • Run automation test cases.
  • Run the code through a security tool to identify security issues.
  • Send the changes to the QA team for further verification once the client-side development activities are completed.

QA

  • Test the functionalities in the product before an official release.
  • Test all use cases to ensure there are no bugs in the developed features.
  • Maintain a history of all test results.
  • Write test cases for new features and bug fixes.
  • Run the test cases manually and through automation.
  • Generate a report for the test cases.
  • File the failed test cases as bugs.
  • Assign a feature or issue ID for future reference upon confirmation that the filed feature request/bug is valid.
  • Track ongoing/roadmap features and existing bugs ahead of the release process.

Product security

  • Run the code via the automated security audit tool.
  • Conduct security audit program for all features and bug fixes.
  • Analyze usage of third-party files and their known vulnerabilities.
  • Get the details of feature/bug fixes from the SD team to discover possible vulnerabilities.
  • Provide suggestions to and guide the SD team on how to create code that is free from vulnerabilities.
  • Create security briefs for both developers and the product support team to provide instant solutions to customers.
  • Monitor recently discovered vulnerabilities.

Privacy

  • Ensure the availability of an option to encrypt columns for fields that may contain sensitive data.
  • Verify confidential data is not printed on application and access logs.
  • Ensure new features adhere to the GDPR and other compliance standards.

User education

  • Liaise with the SD team to understand the purpose of a feature and how it helps the user.
  • Work with the PM team to identify aspects of the features that need to be documented, and define the extent to which each aspect must be explained.
  • Acquire supporting content such as screenshots, figures, or videos.
  • Ensure error-free content in all user-facing content, such as in-product documentation, user interface text, admin guides, user manuals, and release notes.

Configuration management

  • Maintain the source code of products in a structured way, adhering to proprietary code standards, and create executables to deliver to customers.
  • Impart training about the version control system to the SD team.
  • Create the necessary EXE files and service pack from the source code.
  • Add servers to the backup infrastructure for data recovery with the help of the IT team.
  • Manage access rights for developers.
  • Develop internal tools for developers to access their source code.
  • Provide automation in processes for developers.

Stages of development

Stage 1: Ideation

This is the initial stage, or innovation phase, of the SDLC, where we decide what goes into the roadmap based on customer needs and market research. What do customers need? Does the product or feature actually address customers' requirements? This is where product management comes into play.

Our product managers play a key role in devising a vision board for the rest of the software development process, giving the other teams a sense of direction in their responsibilities. The duration of the ideation stage can be a week, a month, or much longer, depending on the requirements. The figure below provides a high-level overview of the ideation process.

Ideation workflow

Step 1: Creating the vision board

A vision board, by definition, is a series of words, images, and ideas that represent goals and drive inspiration. Product managers create this vision by interacting with customers and product experts, and by analyzing the market. ManageEngine has dedicated presales, sales, and customer support teams for each product domain. The input received from leads and customers helps the product management team understand the real-world challenges our customers face when using the products and how we can help.

ManageEngine hosts user conferences with engaging presentations to provide product training for existing customers, gather competitive intelligence, and learn about the latest industry and technological trends. One representative from each of the functions, such as presales, sales, support, and product management, engages with the customers to gain actionable insights to build better features and products.

Step 2: Crafting a product requirements document (PRD)

Product managers build use cases by taking a principle-centric approach. This means, instead of trying to solve a problem right away, they start by identifying the core issue and work their way up.

A simple parallel would be a laptop not charging properly. This does not call for the user to take the entire laptop apart or replace the battery immediately. Could the problem lie elsewhere, like the charging cable, adaptor, or power source? If a customer believes their laptop needs to charge faster, the PM team works on identifying possible solutions. These include using a more powerful battery, a fast-charging cable, opting for a new charging mechanism, or moving applications to the background to reduce power consumption and extend battery life. Product managers gather use cases based on their interactions with end users and recreate every possible scenario with the new solution. This information is used to create a PRD.

A PRD is the master document that our product managers create to list all the capabilities required for a feature or a product release. It describes customer demand (gathered from support tickets and interactions at events) and market opportunities, and is firmly rooted in IT or business use cases for the particular release. Each product management team creates nearly two dozen PRDs on average, of which only four or five are handpicked by the development team based on viability.

A PRD typically entails:

  • Objective: An overview of the document, including the feature name, scope, teams involved, and target release.
  • Problem statement: Describes the problem the feature aims to solve and the users affected by it (user personas). Including the background information that led to this decision helps provide context to readers.
  • Proposed solution: How does the feature under consideration work? Describes the best-suited approach to obtain maximum value from the feature.
  • User stories: Illustrate how users will interact with the feature through specific scenarios. These use cases are usually based on customer feedback.
  • Open issues: Address unresolved questions, assumptions, and concerns that need further clarification.
  • Requirements: List the functional and non-functional requirements and any other dependencies that may affect the feature's performance.
  • Designs: Wireframes, product screenshots, or sample sketches—anything that sets the tone for UI/UX creation.
  • Success metrics: Define parameters for measuring feature impact. Metrics vary based on the feature and often measure efficiency, revenue, and user engagement.

PRDs are modified based on a project's requirements and are revised periodically.

Step 3: Validating the solution

Once the feature or product starts taking shape, product managers collaborate with engineering managers to create a lean version of the solution to determine its feasibility. If we're creating a product that is entirely new to our ecosystem, a proof of concept (POC) demonstrates its viability. On the other hand, when we are introducing a new feature to an existing product, it should blend seamlessly with existing features and not disrupt the user experience.

The wireframe is presented to developers for a brainstorming session where they can review, modify, and validate the PRD. They also have multiple validation sessions to identify workarounds. Say the development team cannot implement an option. The PM team is responsible for suggesting a workaround based on their understanding of user behavior. Since they work with customer support and customer success teams, they are aware of which options could be suitable alternatives.

In some cases, end users are invited to test out the simplified representation of the solution to share their feedback. We need to ask a few questions before moving to the next phase. Does this actually address their concerns? Are they satisfied with the overall experience and performance? If it solves the intended purpose and checks all our boxes, it can move forward.

Stage 2: Development

Development workflow

Step 1: Feature finalization

The product management team provides their input and prioritizes features based on technical viability and business-critical functions that maximize value for our customers. Once this is passed on to the development team, they categorize features as critical and nice-to-have. This is usually done based on resource availability, module usage, and product statistics.

Features are finalized after multiple discussions between the development and product management teams to ensure balance between technical feasibility and demand. For instance, a product manager may request an automated email consisting of a ticket completion survey on the IT help desk. Users can confirm that the technician has fulfilled the ticket to the user's satisfaction through this short survey. It is up to the development team to figure out where to store data like satisfaction scores or if the data needs storing at all. If the data influences reporting capabilities, the feature has to be modified accordingly. All this is hashed out during the discussion between both teams.

Step 2: UI and UX development

At this stage, the UI and UX design and product management teams work together to come up with a rough interface. If we are introducing a new up-vote feature, where should we place the button? And what does the user see after they vote? These finer details are discussed and presented for team review. Zoho Corp. uses an in-house collaboration platform for team reviews. The contributors of these reviews include:

  • UI and UX teams
  • Product managers
  • Product development leads
  • Developers involved in the project
  • QA leads
  • Customer support leads

Members are free to share their thoughts after taking a closer look at the feature proposal, especially if they have any relevant feedback from customers. The design goes through multiple iterations based on the comments. At this stage, the content for the feature is also run past the content team for text verification.

Following the team review, PM, UI, and UX teams work together to create a visual representation of the revised design. The UI and UX development team then creates an HTML interface based on the image, which is again subjected to iterations and finalized.

The UX Design Institute frequently discusses the importance of consistency in design: it builds trust and increases engagement and revenue. An enterprise is expected to be consistent across products in terms of design and user experience. If you're part of an SMB, now is the right time to create a standard and establish that uniformity. The UI and UX development team is responsible for ensuring consistency in our features. The up-vote button, should we choose to introduce it in other modules, should be placed in the same spot. This is reviewed once again by the contributors.

Step 3: Building the code

ManageEngine’s byte-sized insights:

The first line of code for our IT help desk, ServiceDesk Plus, was written in 2004.

While the UI and UX team works on the design, the server-side technical design process is initiated in parallel. Our approach to development is to move in concurrent circles over time, rather than building tall verticals in one go. We start with a simple tool with basic functionalities and expand in phases. The phase split is determined by market requirements and/or technical specifications. For instance, if two features are co-dependent, they are released together.

Once the database design is finalized, the development team will work on the server-side implementation and APIs. They also work on the following reviews:

  • Code review: A senior developer overseeing the module or feature reviews the code and design against the PRD.
  • API review: API automation cases help the development team verify that the server-side API fixes are intact—a key criterion for release. The RESTful API input and the response format are documented and sent to the API framework team for review. Reviewers check for API structure and minimization of data, API security, type, request parameters, response parameters, and authorization.
  • High-level design review: Reviewers check for code flow, module separation, external library usage, product configuration changes, and compatibility with other dependent modules.
  • Third-party code review: Any third-party tools in use are vetted by the legal team for license verification and to confirm the third-party product is legal to bundle with ours. If there's a vulnerability in third-party code, the security team asks developers to skip that tool and use an alternative tool or version instead.
  • Client code review: Client components and element IDs are checked to enable UI automation.
  • Security review: An initial security review is conducted in the early stages of development. After internal verification, the product security team recommends modifications if they believe there is a potential risk in the code.
  • Migration review: Reviewers must gauge the impact of the features and migrations specific to multi-portal capabilities. This includes datacenter migration and cloud migration.
  • Content review: The internationalization (i18n) content is reviewed; any content displayed on the interface or sent as a response from the server side needs a go-ahead from the content team.

Step 4: Test case writing and validation

A software build needs to pass multiple levels of testing before deployment. During the early stages of team review, the QA team creates test cases, i.e., specific scenarios that dictate an input and an expected outcome. These documents are influenced by the PRD and the HTML prototype. They also work with the module team to understand the technical nuances of the feature, and its impact on other features within the product. These test cases are then executed by the development team while building the code. The developer validates the features and suggests corrections (if any) and submits the case validation report.
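
As an illustration, a test case can be as simple as an input paired with an expected outcome. The Python sketch below runs a small set of hypothetical test cases against a stand-in feature (the function, case IDs, and rules are invented for illustration) and produces a pass/fail validation report:

```python
def resolve_priority(impact: str, urgency: str) -> str:
    """Hypothetical feature under test: derive ticket priority from impact and urgency."""
    if impact == "high" and urgency == "high":
        return "critical"
    if "high" in (impact, urgency):
        return "high"
    return "normal"

# Each test case dictates an input and an expected outcome.
TEST_CASES = [
    {"id": "TC-01", "input": ("high", "high"), "expected": "critical"},
    {"id": "TC-02", "input": ("high", "low"), "expected": "high"},
    {"id": "TC-03", "input": ("low", "low"), "expected": "normal"},
]

def run_cases(cases):
    """Execute each case and collect a validation report."""
    report = []
    for case in cases:
        actual = resolve_priority(*case["input"])
        report.append({"id": case["id"], "passed": actual == case["expected"]})
    return report

report = run_cases(TEST_CASES)
print(report)
```

Keeping cases as plain data, separate from the execution loop, mirrors how test case documents can be maintained and revised independently of the code that runs them.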

Step 5: Unit testing

Developers test the individual components of the software by isolating a section of the code and verifying its integrity and functionality. Unit testing allows developers to work faster by modifying parts of code without waiting for the entire software build to be tested. Early bug detection also reduces project costs. In the long run, unit testing makes code maintenance easier with proper documentation.
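
A minimal sketch of the idea, using Python's standard unittest module: a single component (a hypothetical normalize_email helper) is tested in isolation, with no dependency on the rest of the build:

```python
import unittest

def normalize_email(address: str) -> str:
    """Component under test: trim whitespace and lowercase an email address."""
    return address.strip().lower()

class NormalizeEmailTest(unittest.TestCase):
    """Unit tests exercise one component in isolation from the rest of the software."""

    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_email("  User@Example.com "), "user@example.com")

    def test_already_normalized_input_is_unchanged(self):
        self.assertEqual(normalize_email("user@example.com"), "user@example.com")

# Run the suite programmatically and report the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeEmailTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())
```

Because nothing outside the function is involved, the test runs in milliseconds, which is what lets developers iterate without waiting for a full build to be tested.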

Zoho uses an in-house tool that identifies and reports the security vulnerabilities in a build code. It promotes secure software development by helping developers identify possible vulnerabilities in their applications and providing solutions to secure the attack surfaces. The tool takes in a build executable or code, applies a predefined set of rules, and prepares a detailed security report containing the rule violations.

ManageEngine’s byte-sized insights:

The security tool used by our product teams generates over 3,300 reports every day.

Based on the severity of the violations, the build release can be blocked and the developer can revisit the code to rectify the vulnerabilities. If a finding is a false positive, they can add a comment and mark it acceptable to prevent it from being flagged again. Features are built by a set of developers as branches from the source code; they work on that branch and merge into the main branch when they push the code to the repository. Each branch contains a report generated by this tool.

Stage 3: QA testing

QA testing

Step 1: Manual testing

Before the QA team gets to work, they attend a demo session with the developers to understand the fundamentals—the purpose of this feature, how it works, what customer requirements we're fulfilling with this tool, etc. Then, the QA members assigned to this module conduct an exploratory test with the prototype. The build is installed in a test machine and the QA members analyze the feature mechanism to hash out the finer details and understand the feature thoroughly.

Next, the team creates a requirement traceability matrix (RTM) document where it categorizes all the pages that need to be tested. The team also prepares test cases that cover high-level scenarios and all possible positive and negative outcomes. The test cases and RTM document are reviewed and validated by subject matter experts (SMEs) before they proceed to further testing.

Smoke testing

Considered a surface-level test, smoke (or sanity) testing verifies the critical functionalities of a new software build. It is not designed to be complex; it provides a basic assessment to ensure the code changes do not have bugs, perform the desired functions, and don't affect any existing functionality.

Step 2: Automation testing

API automation

ManageEngine uses an in-house tool to verify that the API calls sent out are working as expected. The tool runs individual scripts to receive and evaluate the responses. If there's no response, or if the response does not match the criteria established by the QA team, the test case fails. This is predominantly carried out to test the business layer of an application. Automated API tests help with early bug detection and increase test coverage, allowing teams to work quickly and keep to their schedule.
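
The shape of such a scripted check can be sketched as follows; fetch_ticket here is a hypothetical stand-in for a real HTTP call, and the pass/fail criteria mirror the kind QA would establish:

```python
def fetch_ticket(ticket_id: int) -> dict:
    """Stand-in for an HTTP GET; a real script would use an HTTP client here."""
    return {"status": 200, "body": {"id": ticket_id, "state": "open"}}

def run_api_case(ticket_id: int, expected_state: str) -> bool:
    """Fail the case if there is no response or the response misses the criteria."""
    response = fetch_ticket(ticket_id)
    if response is None or response["status"] != 200:
        return False
    return response["body"].get("state") == expected_state

print(run_api_case(101, "open"))    # → True
print(run_api_case(101, "closed"))  # → False
```

Each scripted case is independent, so a suite of them can be rerun automatically on every build, which is where the early bug detection comes from.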

Migration testing

Migration testing varies depending on the mode of release: on-premises or cloud. It also depends on the rollout. For instance, if we're launching a new product, no migration testing is involved. Feature enhancements, however, require migration testing since users may be on previous versions of the tool.

Database migration testing: Transferring data from an application to an updated database cannot be completed without a migration test. For solutions like our help desk, an update may involve structural changes to existing schema objects like tables, which require different backend code. Migration testing helps developers prevent data loss while ensuring functional criteria are met.
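
A toy illustration of the principle, using Python's built-in sqlite3: rows are migrated into a table whose schema adds a column, then the test verifies that no data was lost (table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Old schema and sample data.
cur.execute("CREATE TABLE tickets_v1 (id INTEGER PRIMARY KEY, subject TEXT)")
cur.executemany("INSERT INTO tickets_v1 VALUES (?, ?)",
                [(1, "Printer down"), (2, "VPN issue")])

# New schema adds a column with a default; migrate the existing rows across.
cur.execute("CREATE TABLE tickets_v2 (id INTEGER PRIMARY KEY, subject TEXT, "
            "priority TEXT DEFAULT 'normal')")
cur.execute("INSERT INTO tickets_v2 (id, subject) SELECT id, subject FROM tickets_v1")

# Migration test: row counts match and every original row survived intact.
old_rows = cur.execute("SELECT id, subject FROM tickets_v1 ORDER BY id").fetchall()
new_rows = cur.execute("SELECT id, subject FROM tickets_v2 ORDER BY id").fetchall()
assert old_rows == new_rows, "data loss detected during migration"
print("migration verified:", len(new_rows), "rows")
```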

OS migration testing

When software is released, it should be compatible across multiple operating systems for all our users. For instance, a product installed on Windows and moved to Linux should be compatible with both. Its performance should also remain unaffected regardless of the version. Our IT service desk team migrates the build to each available server (Windows Server 2012, 2016, 2019, 2022, etc.) and validates performance.

OS migration testing is not a major factor for cloud solutions, as the build is hosted on ManageEngine's server. However, on-premises software updates require extensive database and OS migration testing before release.

Integration testing

When developers are working on a new module, they also need to ensure that it works well when combined with other existing modules, which is determined by integration testing. The goal is to go beyond unit testing and identify bugs or issues that arise when different modules interact with each other. This is especially important because modules are often created by multiple developers, who often implement their own logic. When these modules are packaged and shipped to the customer as one solution, it must offer a seamless experience.
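
A minimal Python sketch of the distinction: two hypothetical modules, written independently, are exercised together through their real interface rather than in isolation:

```python
def calculate_sla_hours(priority: str) -> int:
    """Module A (hypothetical): map a ticket priority to an SLA window in hours."""
    return {"critical": 4, "high": 8}.get(priority, 24)

def build_ticket(subject: str, priority: str) -> dict:
    """Module B (hypothetical): create a ticket, calling module A for the SLA."""
    return {"subject": subject, "priority": priority,
            "sla_hours": calculate_sla_hours(priority)}

# Integration test: verify the modules cooperate, not just work alone.
ticket = build_ticket("Server down", "critical")
assert ticket["sla_hours"] == 4
print(ticket)
```

Each function might pass its own unit tests, yet still disagree on, say, priority naming; only a test that crosses the module boundary catches that class of bug.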

Types of integration testing

UI automation

The UI is what our customers see, and it goes without saying that the whole process is pointless if the user is unable to view or utilize the feature to its full potential. Automated UI testing verifies the usability and functionality of software using test scripts that mimic user actions in common scenarios.

For instance, the QA team executes an image comparison test by taking a screenshot after every operation in the feature. Once the new build is deployed locally, they take the screenshots again. Image comparison tools compare the images pixel by pixel or through algorithms and detect differences. If there's an unplanned difference, it needs to be fixed.
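
The core of a pixel-by-pixel comparison can be sketched in a few lines of Python; here images are simplified to nested lists of RGB tuples rather than real screenshot files:

```python
def diff_pixels(before, after):
    """Return the coordinates of every pixel that changed between two screenshots."""
    changed = []
    for y, (row_a, row_b) in enumerate(zip(before, after)):
        for x, (px_a, px_b) in enumerate(zip(row_a, row_b)):
            if px_a != px_b:
                changed.append((x, y))
    return changed

baseline = [[(255, 255, 255)] * 3 for _ in range(2)]   # 3x2 all-white image
candidate = [row[:] for row in baseline]
candidate[1][2] = (0, 0, 255)                          # one pixel turned blue

print(diff_pixels(baseline, candidate))  # → [(2, 1)]
```

Real tooling adds tolerances and ignore regions so that expected differences (timestamps, animations) don't fail the comparison, but the change-detection idea is the same.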

Following this, we conduct another round of automation testing to ensure that the feature introduced does not disrupt the functionality of other features, which is a possibility since features share some common code. Zoho uses an in-house tool that performs this through scripts. When we initiate the process, the scripts simulate the final version of software to assess where it breaks after the new feature is implemented. Any impact that surfaces is rectified at this stage.

When an issue arises in the code, it goes through an internal bug tracking tool. There, it is assigned to a developer, with two possible outcomes:

  • The bug is validated, fixed, and passed to the QA team for verification, after which the issue is marked as Closed.
  • The bug is found to be invalid (a false positive), in which case the developer adds their comments and forwards it to QA. The QA team verifies it again and closes the issue.

Issue life cycle

Once all the issues have been addressed, the build moves on to the next step in QA testing.

Step 3: Performance testing

Performance testing focuses on the stability, reliability, scalability, and responsiveness of a program under different workload conditions. It helps developers identify and eliminate bottlenecks to improve the software's operations. Some examples of these tests include:

  • Load testing: A subset of performance testing, load testing evaluates the application's abilities by simulating anticipated user scenarios, such as normal and increased usage and a higher number of concurrent requests. Load testing aims to determine the capacity an application can handle without compromising expected operations.
    ManageEngine’s byte-sized insights:

    Zoho Corp’s CEO, Sridhar Vembu, conducts weekly townhall sessions on our collaboration tool to test its functional capacity. There are thousands of attendees every week!

  • Stress testing: While load testing seeks to identify the threshold, stress testing goes above the threshold limit to evaluate the application's response under extreme loads, i.e., its breaking point. In case of events like hard drive failure or abnormal user requests, stress testing helps developers spot issues and understand how the application recovers.
  • API review: With a hundred tools under our belt, API reviews are a necessity to ensure a solid application architecture. For each tool, we have a set of accepted formats for APIs. This review checks whether the development teams have adhered to them.
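
A toy load-test harness in Python gives a feel for the mechanics behind the tests above: a batch of concurrent requests is fired at a stand-in service function and the latencies are checked against a budget (the endpoint, user count, and budget are all invented for illustration):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n: int) -> float:
    """Stand-in for a real endpoint; returns the simulated response time."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server work
    return time.perf_counter() - start

CONCURRENT_USERS = 50   # simulated concurrent load
LATENCY_BUDGET = 0.5    # seconds allowed per request

# Fire all requests concurrently and collect per-request latencies.
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = list(pool.map(handle_request, range(CONCURRENT_USERS)))

within_budget = sum(1 for t in latencies if t <= LATENCY_BUDGET)
print(f"{within_budget}/{CONCURRENT_USERS} requests met the latency budget")
```

Raising CONCURRENT_USERS until the budget starts failing approximates the threshold that load testing looks for; pushing far beyond it is the stress-testing scenario.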

Step 4: Security testing

Upon completion of the initial round of QA, the build is raised to the security team for review. As the name suggests, security testing is a combination of various testing techniques to identify vulnerabilities, threats, and risks in an application and increase security.

White box testing is a technique that assesses the internal structure of a software system including the code, design, and integrations. It is also called clear box testing, transparent box testing, glass box testing, or structural testing. It is far more comprehensive than black box and gray box testing, which test the overall working of the system with no knowledge and partial knowledge of the internal workings, respectively. White box testing takes a test case input, executes it, and provides a final report based on the output. This technique is often used to identify security gaps, breaks in code, and any other design loopholes the developers may have overlooked.

Static application security testing (SAST) is a type of white box testing method that identifies security vulnerabilities. It analyzes the source code, bytecode or binary code to spot suspicious patterns. SAST tools provide feedback before the code is deployed and without executing the application, allowing developers to rectify issues early on. This helps minimize the risk of security breaches, and also reduce the cost and time involved in resolving vulnerabilities.
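
The underlying idea can be sketched in a few lines of Python: scan source text for suspicious patterns without executing it. Real SAST tools parse code and apply far richer rule sets; the rules and sample below are invented for illustration:

```python
import re

# A tiny rule set: each rule is a name plus a pattern that flags risky code.
RULES = {
    "hardcoded-password": re.compile(r"""password\s*=\s*['"]\w+['"]""", re.IGNORECASE),
    "eval-call": re.compile(r"\beval\s*\("),
}

def scan_source(source: str):
    """Return (rule name, line number) for every rule violation found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append((rule, lineno))
    return findings

sample = "user = 'admin'\npassword = 'hunter2'\nresult = eval(expr)\n"
print(scan_source(sample))  # → [('hardcoded-password', 2), ('eval-call', 3)]
```

Because the scan never runs the program, it can sit in the build pipeline and give feedback before deployment, which is exactly the property that makes SAST cheap to apply early.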

For instance, the launch of a new feature invites questions like who can access a URL, what parameters are considered for access, and what validation we have for these parameters. These details are scrutinized by the product security team. Once the security review is complete and all raised issues have been rectified and verified, the QA team gives the go-ahead to initiate the release process.

Stage 4: Deployment

Step 1: Pre-deployment training

For a major release, the QA team provides adequate training on the feature's capabilities to the technical support team. This equips them to deal with challenges customers may present post-release.

Step 2: Documentation

Technical writers are tasked with the responsibility of creating specialized, feature-specific documentation. Examples include:

  • In-line help cards
  • Admin guides
  • User guides
  • Installation guides
  • How-tos
  • Best practices

For new features, writers also create feature pages. This is not a requirement for minor enhancements. These documents are reviewed by developers for technical accuracy and then by product managers for a final review before they go live.

One factor that is often overlooked is that user education is not just technical documentation. There is a profound difference between a technical writer and a content writer. Technical writers write from the perspective of the IT admin, using jargon that may not be comprehensible to someone unfamiliar with the tool. A content writer, however, creates whitepapers, case studies, articles, etc., that give a high-level overview of the feature and its capabilities. This type of content can be consumed by anyone, regardless of domain understanding, and is published after the feature is in use.

Step 3: Release

A week or two before the official launch, we release a beta version for resellers, partners, and select customers. They are encouraged to test out the new features and share their feedback. The marketing team is notified to gear up for the launch with the required material like press relations content, feature pages, and campaigns.

At the tail end of this extensive process, the feature is tagged as ready for release or fit to host. For releases, we have a separate branch in the repository. The build is released as a milestone with a specific build number and corresponding sources tagged in the repository. It moves to the testing environment first, where the release stream is merged with the main branch. The QA team runs all the scripts and tests again. EXE and BIN files are uploaded and downloaded again to validate their integrity. A senior member of the review team conducts a release review.

From the test server, we move the build to a pre-production server and then go live. Other finishing touches like build changes, admin guide updates, and an in-product notification conclude the release process. For on-premises solutions, we target one release every three months and issue a self-service update notifying customers about the new version, with instructions on how to download and install it. Cloud solutions are grouped and released when we have multiple features or enhancements across modules.

Step 4: Maintenance

ManageEngine uses an in-house tool to monitor issues reported by users. Issues are fixed and validated by the module team, after which the QA team runs the automation tests and validates each fix. Finally, the fixes are checked into the release branch.
