Chapter 3:
Gearing up for a new feature

ManageEngine's service desk solution will soon introduce a workflow automation feature, which we'll use as the example for this case study. Let's find out how the teams collaborate to make this happen.

Stage 1: Ideation

Product managers get to work by reviewing customer support tickets and identifying key stakeholders and power users to understand their needs. They also study the general market trends and work with marketing analysts on competitor analysis.

[Image: ManageEngine User Conference, Dallas 2023]

Marketing analysts also attend events like ManageEngine's User Conference and ServiceDesk Plus workshops to interact with decision-makers such as CIOs, IT managers, and admins from various organizations. These events expose them to businesses of different sizes and industries and allow them to understand decision-makers' requirements and challenges firsthand.

Based on these interactions, product managers identify the need for a few new features and multiple feature enhancements, which they share with the software development team for roadmap planning. The new feature is then slated for the next release. In the case of our service desk solution, many users expressed the need for workflow automation to trigger a series of events without human intervention. This would relieve admins of routine tasks like employee onboarding and certain software requests.

The PM team identifies use cases across industries and creates a product requirements document (PRD) that highlights the objective of the feature and how workflow automation can benefit users. They work with the UI and UX teams to create a prototype for validation.

Stage 2: Development

In the previous chapter, we talked about the collaboration platform where our teams review the feature proposal shared by the PM team. The design is modified based on their input, and the PM, UI, and UX teams work together to create a visual representation of the revised design. Developers analyze the requirements and verify that the software build is feasible. Once all the teams involved are on the same page about the upcoming release, they create a design document.

Now, the members directly involved in the project also create a channel on our communication tool. This includes product managers, designers, developers, QA technicians, and the module and product leads. Any updates from that point forward are communicated through the channel.

ManageEngine often hires young talent, some with limited technical knowledge or background. Naturally, team leaders cannot provide them with access to confidential software information or frameworks unless they know that the recruit is ready for the job. Ever wondered how we train them?

Like most organizations, we have a security checklist. A trainee developer has to reverify it periodically to ensure the checklist stays up to date. They are also assigned a mini project where they work on a feature and convert it into a web application. While the feature is never released, it revolves around the principles of our internal security framework and coding language, allowing them to grasp the concepts while coding. It also allows leaders to determine their level of understanding and how much guidance a developer requires before they can start working independently. After solidifying their basic understanding, they are asked to adapt their application to Mickey, our internal framework. This entire process usually takes three to six months, after which they are assigned tasks based on their module.

While working on the module, developers review the design, code, API, and security, and get approval from the content team for webpage content. Mentors or project leads conduct internal code reviews to make sure the code structure and logic are in line with guidelines, while other module leads review external code changes. They also validate the test cases provided by the QA team.

Security review is initiated in the development stage with unit testing and vulnerability checks. For instance, the security team refers to the OWASP Top Ten from the Open Web Application Security Project, a "global community of developers, security professionals, and volunteers who work to improve the security of software through open source projects, events, and education." They also maintain an internal checklist that allows developers to cover all bases before automated review.

Examples:

A. Improper input validation

According to OWASP, improper validation of input parameters could lead to attackers injecting payloads to compromise confidential user information.

Solution:

  • Configure all possible parameters and their respective data types in the security XML configuration file.
  • Validate all input fields on both the client and server sides.
  • Use regex patterns to validate all parameter values.
  • Check referenced entity fields or parameter values against deleted, trashed, inactive, or archived entities.
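
To make these checks concrete, here's a minimal server-side validation sketch in Java. The parameter names, regex patterns, and the in-code registry (a stand-in for the security XML configuration file) are all hypothetical:

```java
import java.util.Map;
import java.util.regex.Pattern;

// Illustrative server-side input validation. The registry below stands in
// for the security XML configuration file; names and patterns are hypothetical.
public class InputValidator {

    // Each expected parameter is mapped to an allowed regex pattern.
    private static final Map<String, Pattern> ALLOWED_PARAMS = Map.of(
            "workflowName", Pattern.compile("^[A-Za-z0-9 _-]{1,60}$"),
            "taskId",       Pattern.compile("^\\d{1,10}$")
    );

    public static String validate(String name, String value) {
        Pattern pattern = ALLOWED_PARAMS.get(name);
        if (pattern == null) {
            // Reject parameters that were never configured.
            throw new IllegalArgumentException("Unknown parameter: " + name);
        }
        if (value == null || !pattern.matcher(value).matches()) {
            // Reject values that fail the configured pattern.
            throw new IllegalArgumentException("Invalid value for: " + name);
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(validate("taskId", "42"));      // passes
        System.out.println(validate("taskId", "42;DROP")); // throws
    }
}
```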

B. Java deserialization vulnerability

Java application servers support deserialization of objects from data streams, including cookie values. This makes it possible to pass exploit code to a server via HTTP requests.

Solution:

  • Avoid storing or reading Java objects from files and cookies, as this poses a vulnerability when the files and cookies are accessed by hackers.
  • Hackers can pass a malicious serialized object of their own in place of the original and take complete control of the remote system. Never store or read a Java object from a file without proper security prerequisites like input validation, data sanitization, and serialization filters.
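
Java 9 and later ship a built-in serialization filter mechanism (JEP 290) that enforces exactly this kind of prerequisite. Here is a minimal sketch that allowlists a hypothetical trusted package and rejects everything else before the object graph is constructed:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;

public class SafeDeserialization {

    static Object readTrusted(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            // Allow only classes from a (hypothetical) trusted package, cap the
            // graph depth, and reject everything else before instantiation.
            in.setObjectInputFilter(ObjectInputFilter.Config.createFilter(
                    "com.example.trusted.*;maxdepth=5;!*"));
            return in.readObject();
        }
    }
}
```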

Automated review utilizes two in-house tools:

  • Security automation tool (SAT): Detects security issues listed in the security checklist as well as XML misconfigurations in the feature. SAT uses text-based validation of the code to identify vulnerable patterns and generates a report. Every issue it reports must be fixed; if anything needs to be excluded as a false positive, the developer must raise an exclude request in the tool. The report is evaluated during the final security review of this stage.
  • Security validation tool: Another in-house tool that operates in parallel with SAT, this tool also looks for vulnerabilities in code. It can be considered an additional layer of security to spot any bugs the SAT may have missed. This tool also generates a report that is evaluated in the final security review.
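
SAT itself is internal, but the idea of text-based validation can be illustrated with a toy scanner that flags source lines matching known-risky patterns. The rules below are simplified illustrations, not the tool's actual checks:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.regex.Pattern;

// Toy text-based scanner in the spirit of an automated review tool;
// the rules are simplified illustrations, not SAT's actual checks.
public class ToyScanner {

    private static final List<Pattern> RULES = List.of(
            Pattern.compile("readObject\\s*\\("),           // raw deserialization
            Pattern.compile("Runtime\\.getRuntime\\s*\\("), // command execution
            Pattern.compile("(?i)select\\s+.*\\+\\s*\\w+")  // string-built SQL
    );

    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Path.of(args[0]));
        for (int i = 0; i < lines.size(); i++) {
            for (Pattern rule : RULES) {
                if (rule.matcher(lines.get(i)).find()) {
                    // Flag the line for a fix or an exclude request.
                    System.out.printf("Line %d matches %s%n", i + 1, rule.pattern());
                }
            }
        }
    }
}
```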

ManageEngine’s byte-sized insights:

Our in-house security tool is validated against the OWASP Benchmark and is a crucial part of Zoho Corp.'s security review.

To recap: here's the list of reviews conducted by our development team.

[Flowchart: QA validation]

QA starts their validation only when this entire process is complete.

Stage 3: QA testing

When the build is passed on to QA, the head of QA for the product reviews the requirements and resource availability. If a technician from the selected module is available to take up the feature enhancement, they select a date for knowledge transfer, i.e., a demo session to understand the changes. The demo session is conducted by the developer handling the incident management module. The attendees include the QA technicians for the module, the API automation lead, and, if required, a senior analyst. Once this is complete, the prototype (the sample URL generated in the local build) is installed on a test machine for the following:

Exploratory testing > RTM (requirements traceability matrix) preparation > Functional test case preparation

Functional test cases are high-level business flow scenarios that cover all possible outcomes. These optimized test cases are prepared in accordance with SDLC standards and follow a pass/fail mechanism. The test cases must cover positive and negative scenarios. For instance, when the approver attempts to log in to the service desk, they require a username and a password. If either is incorrect, it is a negative scenario. The goal is to ensure the page doesn't break or return a blank response. If it does, the test case fails, which means the requirement itself fails and goes back for fixes.
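
To illustrate the pass/fail mechanism, here's a minimal JUnit 5 sketch covering one positive and one negative login scenario. The LoginService class and its credentials are hypothetical stand-ins for the real authentication flow:

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Hypothetical service under test: returns true only for valid credentials
// and must never throw or return a blank response for bad input.
class LoginService {
    boolean login(String username, String password) {
        return "approver".equals(username) && "s3cret".equals(password);
    }
}

class LoginTest {

    private final LoginService service = new LoginService();

    @Test
    void validCredentialsSucceed() {          // positive scenario
        assertTrue(service.login("approver", "s3cret"));
    }

    @Test
    void wrongPasswordFailsGracefully() {     // negative scenario
        // The page must not break: a clean false, not an exception.
        assertFalse(service.login("approver", "wrong"));
    }
}
```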

[Image: QA test cases]

Since we're adding a new feature, the QA team must revisit all test cases impacted by the new build. The test cases are modified, reviewed by a subject matter expert (SME), and approved as bug-free. The RTM is also reviewed by another SME before the build moves on to the next test.

The QA team operates with three levels of testing:

  • User machine testing: The build is tested on its own in a test machine. All test cases and functionalities are validated.
  • Testing environment: A mock or sample version of the staging environment. Here, multiple feature branches are pushed into a single build and tested together in line with the release plan.
  • Staging environment: Also called the pre-prod environment, it attempts to create a replica of the production environment, down to the load and architecture. Any bugs or issues that arise here will not affect customers.

User machines have three platforms: a default build, an intermediate build, and a feature build. The technician conducts user machine testing with the following tests on the default build:

  • UI functional testing
  • API testing (see the sketch after this list)
    1. API test case preparation
    2. API test case review
    3. API execution
  • Defect retesting
  • Migration testing
  • Sandbox testing
  • Integration testing
  • Extension testing
  • UI automation testing
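
As a sketch of what API test execution might look like, the example below calls a hypothetical workflow endpoint with Java's built-in HttpClient and applies a pass/fail check. The URL, payload, and expected status code are placeholders, not the product's actual API:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WorkflowApiTest {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical endpoint and payload for the workflow automation feature.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/v3/workflows"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"name\":\"onboarding\",\"trigger\":\"new_hire\"}"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Pass/fail check mirroring the test case's expected outcome.
        if (response.statusCode() != 201) {
            throw new AssertionError("Expected 201, got " + response.statusCode());
        }
        System.out.println("API test passed: " + response.body());
    }
}
```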

This is followed by another round of performance testing. Afterwards, the build is moved to the testing environment with sample data in the feature build for test execution. Here, the technician executes the following tests:

  • Integration testing
  • Extension testing
  • Migration testing
  • Sandbox testing
  • UI automation testing
  • API testing

The product security team and QA technicians also work together to complete security and performance tests. In this case, let's say a customer requested the option to import files when executing a new task, and the developers introduced an import option in the feature. Hackers could craft a malicious file and upload it via this option, where it can reach the application database and cause harm. To prevent that, files should be scanned before upload, which the security team verifies.
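
As a rough sketch of the kind of pre-upload screening the security team verifies, the example below allowlists file extensions, caps the size, and checks magic bytes so a renamed executable can't slip through. The allowlist and limits are illustrative, and a real pipeline would follow up with an antivirus scan:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Set;

// Illustrative pre-upload screening; the allowlist and size cap are
// hypothetical, and a real pipeline would also run an antivirus scan.
public class UploadScreen {

    private static final Set<String> ALLOWED_EXTENSIONS = Set.of("csv", "txt", "pdf");
    private static final long MAX_BYTES = 10L * 1024 * 1024; // 10 MB cap

    static void check(String fileName, long size, InputStream content) throws IOException {
        String ext = fileName.substring(fileName.lastIndexOf('.') + 1).toLowerCase();
        if (!ALLOWED_EXTENSIONS.contains(ext)) {
            throw new SecurityException("Disallowed file type: " + ext);
        }
        if (size > MAX_BYTES) {
            throw new SecurityException("File too large");
        }
        byte[] head = content.readNBytes(2);
        // Reject Windows executables regardless of extension ("MZ" header).
        if (head.length == 2 && head[0] == 'M' && head[1] == 'Z') {
            throw new SecurityException("Executable content detected");
        }
    }
}
```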

The technician compiles a report for the completed tests, along with use case, security, and performance test details, and submits it for further testing. The product team conducts a central review, overseen by the product head, before the build proceeds to the staging environment. The same list of tests is repeated and a new report is compiled. Finally, the API and UI code is merged into the default build.

After preproduction testing, the release build is merged with the main branch. Here, the scripts are run once again and the logs team checks for a surge in logs, which usually indicates an issue. If there are any critical issues, the release is blocked. Otherwise, it moves for a final review by the senior quality analyst. The senior analyst uses a release checklist to ensure everything is on track. They also conduct a sanity test before the release and label it fit to host once the build clears everything.
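
To illustrate the log surge check mentioned above, here's a toy threshold comparison; the baseline, current rate, and tolerance factor are all hypothetical:

```java
// Toy illustration of a log surge check: flag the release when the
// post-merge error rate spikes past a multiple of the baseline.
// All numbers here are hypothetical.
public class LogSurgeCheck {

    public static void main(String[] args) {
        long baselinePerHour = 120;  // errors/hour before the merge (assumed)
        long currentPerHour  = 480;  // errors/hour after the merge (assumed)
        double surgeFactor = (double) currentPerHour / baselinePerHour;

        if (surgeFactor > 3.0) {
            System.out.println("Surge detected (" + surgeFactor + "x): block the release");
        } else {
            System.out.println("Log volume within tolerance; proceed to final review");
        }
    }
}
```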

From here, the build moves to the preproduction server, which allows us to test with real data without affecting the live environment. After a couple of days of monitoring, it moves forward for release.

Stage 4: Deployment

The QA technician trains tech support on the new feature, while technical and content writers work with the marketing team to initiate marketing activities for the feature release. The marketing team is usually notified of the upcoming feature and a tentative implementation date during the ideation phase. This allows them to study the feature and its capabilities and compile a marketing checklist to inform customers of its release. The checklist typically covers the following:

  • Landing page
  • Mailers for customers, prospects, and partners
  • Social media campaign
  • Webinars and how-to videos
  • Help articles
  • In-product notifications
  • Press release (for major releases)

Finally, when the feature is ready for release, the QA team conducts the following activities:

  • The build is tagged fit to host and labeled with a unique build number.
  • The build moves from the local environment to the testing environment where the release stream merges with the main branch. The technician executes the scripts once again.
  • The QA technician validates integrity, and a senior technician conducts a release review.
  • The build moves to the preproduction server and then to the live environment.

Now the workflow automation feature is live for all users. Customers can design single-touch workflows through a drag-and-drop canvas and eliminate manual intervention wherever possible.

The main role of a QA technician in post-release activities is defect triage. Reported issues are fixed, validated, and checked into the release branch. However, instead of just addressing issues and moving on, they conduct an analysis to identify common defects, the affected customer base, and why these issues occur. Let's say we're talking about production issues. Do they occur due to the environment, changing requirements, or any miscommunication between stakeholders? The defect triage process allows developers and QA technicians to come up with precautionary solutions and prevent issue repetition.
