Java, NoSQL, SQL, REST API and other scary words

Most popular and simple Search and Sort algorithms

Binary Search

Given a sorted array arr[] of n elements, write a function to search a given element x in arr[].

  1. Compare x with the middle element.
  2. If x matches the middle element, we return the mid index.
  3. Else if x is greater than the mid element, then x can only lie in the right half subarray after the mid element, so we recur for the right half.
  4. Else (x is smaller), we recur for the left half.
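
A minimal iterative Java sketch of these steps (the method name and returning -1 when x is not found are my own conventions):

// Binary search in a sorted array: returns the index of x, or -1 if absent.
static int binarySearch(int arr[], int x)
{
    int low = 0, high = arr.length - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;   // middle element, overflow-safe
        if (arr[mid] == x)
            return mid;                     // x matches the middle element
        else if (arr[mid] < x)
            low = mid + 1;                  // x can only lie in the right half
        else
            high = mid - 1;                 // x is smaller: search the left half
    }
    return -1;
}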

Search an element in a sorted and rotated array

Input  : arr[] = {5, 6, 7, 8, 9, 10, 1, 2, 3}; key = 3

Output : Found at index 8

1) Find the middle point: mid = (l + r)/2

2) If key is present at middle point, return mid. if (arr[mid] == key) return mid;

3) Else if arr[l..mid] is sorted:

   a) If the key lies in the range arr[l] to arr[mid], recur for arr[l..mid].

   b) Else recur for arr[mid+1..r].

4) Else (arr[mid+1..r] must be sorted):

   a) If the key lies in the range arr[mid+1] to arr[r], recur for arr[mid+1..r].

   b) Else recur for arr[l..mid].
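
A recursive Java sketch of these steps (names are mine; it assumes distinct elements and returns -1 when the key is absent):

// Search key in a sorted-and-rotated array within arr[l..r]; returns the index or -1.
static int searchRotated(int arr[], int l, int r, int key)
{
    if (l > r)
        return -1;
    int mid = l + (r - l) / 2;
    if (arr[mid] == key)
        return mid;

    if (arr[l] <= arr[mid])                          // arr[l..mid] is sorted
    {
        if (key >= arr[l] && key < arr[mid])
            return searchRotated(arr, l, mid - 1, key);
        return searchRotated(arr, mid + 1, r, key);
    }
    // otherwise arr[mid+1..r] must be sorted
    if (key > arr[mid] && key <= arr[r])
        return searchRotated(arr, mid + 1, r, key);
    return searchRotated(arr, l, mid - 1, key);
}

For the example above, searchRotated(arr, 0, 8, 3) returns 8.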

Java, NoSQL, SQL, REST API and other scary words

Array Algorithms

Reverse an array or string

Input: 123

Output: 321

1) Initialize start and end indexes (start = 0, end = n-1).

2) In a loop, swap arr[start] with arr[end] and change start and end as follows (start = start + 1; end = end - 1)
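
A short Java sketch of this in-place swap:

// Reverse arr in place with two indexes moving toward each other.
static void reverse(int arr[])
{
    int start = 0, end = arr.length - 1;
    while (start < end)
    {
        int tmp = arr[start];               // swap arr[start] and arr[end]
        arr[start] = arr[end];
        arr[end] = tmp;
        start++;
        end--;
    }
}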

Given an array A[] and a number x, check for pair in A[] with sum as x

1) Sort the array in non-decreasing order. sort(A, 0, arr_size-1);

2) Initialize two index variables to find the candidate elements in the sorted array.

   (a) Initialize first to the leftmost index: l = 0

   (b) Initialize second to the rightmost index: r = arr_size-1

3) Loop while l < r.

   (a) If (A[l] + A[r] == sum)  then return 1

   (b) Else if( A[l] + A[r] <  sum )  then l++

   (c) Else r--

4) No candidates in the whole array: return 0
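
A Java sketch of the same two-pointer idea, using Arrays.sort for step 1 (the method name is mine):

import java.util.Arrays;

// Returns 1 if some pair in A sums to x, otherwise 0.
static int hasPairWithSum(int A[], int x)
{
    Arrays.sort(A);                         // 1) sort in non-decreasing order
    int l = 0, r = A.length - 1;            // 2) leftmost and rightmost indexes
    while (l < r)                           // 3) move the two pointers inward
    {
        int sum = A[l] + A[r];
        if (sum == x)
            return 1;                       // candidate pair found
        else if (sum < x)
            l++;
        else
            r--;
    }
    return 0;                               // 4) no candidates in the whole array
}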

/* The main function that implements QuickSort()
   arr[] --> Array to be sorted, low --> Starting index, high --> Ending index */

static void sort(int arr[], int low, int high)
{
    if (low < high)
    {
        /* pi is partitioning index, arr[pi] is now at right place */
        int pi = partition(arr, low, high);

        // Recursively sort elements before partition and after partition
        sort(arr, low, pi-1);
        sort(arr, pi+1, high);
    }
}
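
The sort() above calls partition(), which isn't shown in the snippet. A common Lomuto-style partition that fits this signature would look roughly like this (a sketch, not necessarily the original one):

// Lomuto partition: uses arr[high] as the pivot, places it at its final
// sorted position and returns that index.
static int partition(int arr[], int low, int high)
{
    int pivot = arr[high];
    int i = low - 1;                        // boundary of the "<= pivot" region
    for (int j = low; j < high; j++)
    {
        if (arr[j] <= pivot)
        {
            i++;
            int tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
        }
    }
    int tmp = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = tmp;
    return i + 1;
}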

QA (eng)

QA terms questionnaire

What is a Test Plan?

A Test Plan is a document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task (roles and responsibilities), and any risks and their mitigation.

What is Software Testing Life Cycle (STLC)?

The testing of software has its own life cycle. It starts with studying and analyzing the requirements. Here is the software testing life cycle:

  1. Requirement Study
  2. Test Planning
  3. Writing Test Cases
  4. Review the Test Cases
  5. Executing the Test Cases
  6. Bug logging and tracking
  7. Close or Reopen bugs

What is meant by the Build Deployment?

When the build prepared by the Configuration Management Team is sent to the different test environments, this is called build deployment.

What is Test Strategy?

A test strategy is an outline that describes the testing portion of the software development cycle. It is created to inform project managers, testers, and developers about some key issues of the testing process. This includes the testing objective, methods of testing new functions, total time and resources required for the project, and the testing environment.

The test strategy is created based on development design documents. It is written by the Test Manager or Lead. It includes the introduction, scope, resources and schedule for test activities, acceptance criteria, test environment, test tools, test priorities, test planning, executing a test pass and types of test to be performed.

The following are some of the components that the Test Strategy includes:

  1. Test Levels
  2. Roles and Responsibilities
  3. Environment Requirements
  4. Testing Tools
  5. Risks and Mitigation
  6. Test Schedule
  7. Regression Test Approach
  8. Test Groups
  9. Test Priorities
  10. Test Status Collections and Reporting
  11. Test Records Maintenance
  12. Requirements Traceability Matrix
  13. Test Summary

Are the Test Plan and Test Strategy the same type of document?

The Test Plan is a document that collects and organizes test cases by functional areas and/or types of testing in a form that can be presented to other teams and/or the customer, whereas the Test Strategy is the documented approach to testing.

What is Negative Testing?

Testing the system or application using invalid (negative) data is called negative testing. For example, entering a 6-character password where 8 characters are required should display an error message.

What is the difference between Load Testing and Performance Testing?

Load testing checks the users’ response time when a number of users run any one scenario of the application at the same time, whereas performance testing checks the users’ response time for multiple scenarios of the same application.

SQL

  • SQL stands for Structured Query Language. A database is a collection of logically related data designed in tabular form to meet the information needs of one or more users.

What is Change Control?

  • Change Request

What is XML?

XML stands for eXtensible Markup Language.

How do you make sure that it is quality software?

There should be no critical defects (0 critical), no high defects (0 high), no medium defects (0 medium), and maybe one low-severity defect.

How would you ensure that you have covered 100% testing?

The testing coverage is defined by the exit criteria (the Test Strategy contains both entry and exit criteria). For example, only 2 low-severity defects are acceptable; once the exit criteria are met, the software is considered to be sufficiently tested.

What are all the basic elements in a defect report?

The basic elements in a defect report are: Defect ID, Header, Description, Defect Reported by, Date, Status, Version, Assigned to, Approved by, Module where the defect was found and so on.

What is the difference between verification and validation?

Verification: Verification is the process of ensuring that the software that is built matches the original design. It checks whether you built the product right, as per the design.

Validation: Validation is the process of checking whether the product fits the client’s needs. It checks whether you built the right thing, i.e. whether the right product was designed.

What are the types of test cases that you write?

We write test cases for smoke testing, integration testing, functional testing, regression testing, load testing, stress testing, system testing and so on.

How to write Integration test cases?

When we do the functional testing, the integration testing is automatically done. This is my experience.

How to write Regression test cases? What are the criteria?

Regression test cases are also based on the requirement documents.

What is Test Harness?

In software testing, a test harness or automated test framework is a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs. It has two main parts: the test execution engine and the test script repository.

What are the different matrices that you follow?

There are various reports we normally prepare in QA:

  • Test Summary Report – a report that lists the total test cases, the executed test cases, the remaining test cases to be executed, the execution date, and pass/fail status.
  • Defect Report – in this report we normally prepare a list of defects in a spreadsheet, e.g. defect # CQ12345 (if you log defects in an application such as Rational ClearQuest).
  • Traceability Matrix

What is parallel/audit testing?

Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.

What is software testing methodology?

One software testing methodology is to use a three-step process of:
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.
This methodology can be used and molded to your organization’s needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his customers’ applications.

What is the general testing process?

The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.

How do you create a test strategy?

The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:

  • A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
  • A description of roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
  • Testing methodology. This is based on known standards.
  • Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
  • Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:

  • An approved and signed-off test strategy document and test plan, including test cases.
  • Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

How do you create a test plan/design?

Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing and expected results, including database updates, file outputs, report results. Generally speaking…
Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
It is the test team that, with assistance of developers and clients, develops test cases and scenarios for integration and system testing.
Test scenarios are executed through the use of test procedures or scripts.
Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
Test procedures or scripts include the specific data that will be used for testing the process or transaction.
Test procedures or scripts may cover multiple test scenarios.
Test scripts are mapped back to the requirements and traceability matrices are used to ensure each test is within scope.
Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
A pretest meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
Inputs for this process:

  • Approved Test Strategy Document.
  • Test tools, or automated test tools, if applicable.
  • Previously developed scripts, if applicable.
  • Test documentation problems uncovered as a result of testing.
  • A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code and software complexity data.

Outputs for this process:

  • Approved documents of test scenarios, test cases, test conditions and test data.
  • Reports of software design issues, given to software developers for correction.

How do you execute tests?

Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the execution phase, daily if required, to address and discuss testing issues, status and activities.

The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged and discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.

Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer’s risk assessment and recorded in their selected tracking tool. Proposed fixes are delivered to the testing environment based on the severity of the problem. Fixes are regression tested and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead.
After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager’s formal acceptance.

How do you divide the application into different sections to create scripts?

First of all, the application is divided into different parts when a business analyst writes the requirement documents (or use cases or design documents): he/she writes a separate requirement document for EACH module.

What is a ‘Show Stopper’?

A show stopper is a defect or bug that stops the user from taking any further action (testing). It has no workaround.

Java, NoSQL, SQL, REST API and other scary words

Rest Chat – Avatars, News Feed and State Machine

So, as a next step I decided to implement Avatars, News Feed and State Machine.

Avatars

Adding avatars was pretty simple: add a module that handles image upload and cropping before the image is uploaded to the avatar field in the DB. I found a couple of modules, but they are not 100% working:

  1. https://github.com/andyshora/angular-image-crop (zoom does not work with Chrome 57.0 and AngularJS 1.4.8)
  2. https://github.com/allenRoyston/ngCroppie does not work at all (I filed a bug: https://github.com/allenRoyston/ngCroppie/issues/32).
  3. etc

So, finally I found the ngImgCrop module for this purpose.

State Machine

I used a classic state machine to handle friendship statuses (and errors), so I drew a pretty simple scheme with the transitions and, after that, an Excel file with a description of every transition (including error handling with proper messages). You will never see these error messages unless you modify the existing REST URLs to, for example, try to approve friendship with some unknown user.

Since this is a pretty simple and well-known area, here is some theoretical description and the implementation of the state machine:

code snapshot:

case "Restore Subscription":
    switch (relationFriend) {
        case 0: case 1: case 10: case 11: case 12:
        case 20: case 21: case 22: case 23:
            errorExists = 1;
            errorText = "There is no ignored friend. ";
            break;
        case 30:
            newRelationFriend = 21;
            break;
    }
    break;
case "Unsubscribe":

News Feed

In fact, a news feed is somewhat like messaging, but with different security and different CQL statements, of course. Therefore, the new requirements for the News Feed are:

  1. Add News Feed item
  2. Edit News Feed item
  3. Delete News Feed item
  4. Add subscription
  5. Delete subscription
  6. Hide News Feed item from everybody
  7. Show News Feed item for friends only
  8. Show News Feed item for everybody
  9. Select favorite authors

I don’t want to write a full description for each requirement or describe user stories here, because I’m a Facebook user and the requirements at this level are pretty obvious and straightforward. I skipped some of these requirements during implementation and will implement them in one of the next iterations.

Let me describe my current database structure (I used draw.io for that):

Let me introduce DB changes:

1)  Introduce new object to store News Feed items

2) Add fields to store avatars

So, as a result I got this demo video and a bunch of items for the next version in my backlog.

Java, NoSQL, SQL, REST API and other scary words

Angular JS – implement Emoji

Today I’ll show you how to implement custom emoji (in fact, how to replace any text with any image) with AngularJS. Be warned: this will kill your browser, because it is a pretty heavy operation, and for a chat application that refreshes every 3 seconds this kind of solution is a real disaster.

So, the first step is to implement a custom binding for your text; you can easily do it this way:

(screenshot: template markup that passes the message through the emoji function)

As you can see, I’m using the emoji function there; here is my code listing:

var emoticons = {
    ':)' : getUrlToMessages + 'img_smile.png',
    ':(' : getUrlToMessages + 'img_sad.png',
    ':D' : getUrlToMessages + 'img_haha.png',
    ':o' : getUrlToMessages + 'img_omg.png'
}, patterns = [], metachars = /[[\]{}()*+?.\\|^$\-,&#\s]/g;

// build the regex pattern for each defined emoticon once (escaping metacharacters),
// instead of re-building and duplicating it on every call
for (var i in emoticons) {
    if (emoticons.hasOwnProperty(i)) {
        patterns.push('(' + i.replace(metachars, "\\$&") + ')');
    }
}

$scope.emoji = function(message) {
    if (message != null) {
        // replace every emoticon with its <img> tag, then trust the whole string
        // so ng-bind-html renders it instead of sanitizing it away
        var html = message.replace(new RegExp(patterns.join('|'), 'g'), function (match) {
            return typeof emoticons[match] != 'undefined' ? '<img src="' + emoticons[match] + '" />' : match;
        });
        return $sce.trustAsHtml(html);
    }
};

Since I don’t want to use any custom images, I’ll just use decimal-code emoji. This is not so straightforward, because Angular’s $sanitize service attempts to convert the characters to their HTML-entity equivalents. To keep that HTML from going through $sanitize, pass your string through $sce.trustAsHtml:

$scope.emoji = function(message){
 return $sce.trustAsHtml(message);
 }

 

Java, NoSQL, SQL, REST API and other scary words

Cassandra Datastax and Java – best way to set up connection

I’ll research the best way to make a connection from my Java code to Cassandra here. There are a lot of examples of how to do that, but the main thing is that I’m developing a kind of chat application on my localhost (single insert/update statements, etc.), whereas all these Spark examples are perfect for analytical workloads.

The first example is for Spark 1.6:

public static JavaSparkContext getCassandraConnector(){
         SparkConf conf = new SparkConf();
         conf.setAppName("Chat");
         conf.set("spark.driver.allowMultipleContexts", "true");
         conf.set("spark.cassandra.connection.host", "127.0.0.1");
         conf.set("spark.rpc.netty.dispatcher.numThreads","2");
         conf.setMaster("local[2]");

         JavaSparkContext sc = new JavaSparkContext(conf);
         return sc;
    }

So, I also got an example for Spark 2.x, where the builder will automatically reuse an existing SparkContext if one exists and create one if it does not. Configuration options set in the builder are automatically propagated over to Spark and Hadoop during I/O.
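
For reference, that Spark 2.x version looks roughly like this (a minimal sketch reusing the host and app name from the 1.6 example above):

import org.apache.spark.sql.SparkSession;

// Spark 2.x: SparkSession.builder() reuses an existing SparkContext if one exists,
// or creates one otherwise; config options are propagated automatically.
public static SparkSession getCassandraSession(){
         return SparkSession.builder()
                 .appName("Chat")
                 .master("local[2]")
                 .config("spark.cassandra.connection.host", "127.0.0.1")
                 .getOrCreate();
    }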

QA (eng)

QA terms and questions

What are the different types of software testing?

Note: Except for shakeout testing and unit testing, which are done by the CMT and the coder/developer respectively, all other testing is done by the QA Engineer (tester).

1) Unit testing: It is a test to check whether the code is working properly as per the requirement. It is done by the developers (not testers).

2) Shakeout testing: This test is basically carried out to check the networking facility, database connectivity and the integration of modules. (It is done by the Configuration Team)

3) Smoke testing: It is an initial set of tests to check whether the major functionalities are working and to check for major breakdowns in the application. It is the preliminary test carried out by the SQA tester.

4) Functional testing: It is a test to check whether each and every functionality of the application is working as per the requirement. It is the major test, where 80% of the tests are done. In this test, the test cases are ‘executed’.

5) Integration testing: It is a test to check whether all the modules are combined together and working successfully as specified in the requirement.

6) Regression testing: When a functionality is added to an application, we need to make sure that the newly added functionality does not break the application. To make sure of this, we perform repeated testing, which is called regression testing. We also do regression testing after the developers fix bugs.

7) System testing: Testing that is based on the overall requirements specification and covers all the combined parts of a system. It is a black box type of testing, performed by the Test Team; at the start of system testing the complete system is configured in a controlled environment. System testing simulates real-life scenarios in a “simulated real life” test environment and tests all functions of the system that are required in real life. Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved.

8) Load testing: It is a test to check users’ response time when a number of users run any one scenario (a single business process) of the application at the same time.

9) Stress testing: In this type of testing the application is tested against a heavy load, such as complex numerical values, a large number of inputs, a large number of queries, etc., which checks the stress/load the application can withstand.

10) Performance testing: It is a test to check users’ response time when a number of users run multiple scenarios (multiple business processes) of the same application at the same time.

11) User acceptance testing: In this type of testing, the software is handed over to the user in order to find out if the software meets the user expectations and works as it is expected to.

12) Black box testing: It is a test where the tester performs testing without looking into the code; in other words, a testing method where the application under test is viewed as a black box and the internal behavior of the program is completely ignored. Testing occurs based upon the external specifications. It is also known as behavioral testing, since only the external behavior of the program is evaluated and analyzed.

13) White box testing: It is a test where a tester looks into the code and performs the testing.

14) Alpha testing: In this type of testing, the users are invited to the development center, where they use the application, and the developers note every particular input or action carried out by a user. Any abnormal behavior of the system is noted and rectified by the developers.

15) Beta testing: In this type of testing, the software is distributed as a beta version to the users, and the users test the application at their sites. As the users explore the software, any exception/defect that occurs is reported to the developers.

16) Acceptance testing: Is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager, however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

17) Recovery/error testing: Is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

18) Security/penetration testing: Is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

19) Compatibility testing: Is testing how well software performs in a particular hardware, software, operating system, or network environment.

20) Comparison testing:  Is testing that compares software weaknesses and strengths to those of competitors’ products.

21) Incremental testing: After unit testing is completed, the developer performs integration testing, which is the process of verifying the interfaces and interaction between modules. While integrating, there are lots of techniques used by developers, and one of them is the incremental approach: the developers integrate the modules one by one using stubs or drivers to uncover defects. This is known as incremental integration testing. By contrast, big bang is another integration testing technique, where all the modules are integrated in one shot.

22) End-to-end testing: End-to-end testing is a technique used to test whether the flow of an application right from start to finish is behaving as expected. The purpose of performing end-to-end testing is to identify system dependencies and to ensure that the data integrity is maintained between various system components and systems. The entire application is tested for critical functionalities such as communicating with the other systems, interfaces, database, network, and other applications.

23) Sanity testing: Sanity testing is a software testing technique performed by the test team as a set of basic tests, conducted whenever a new build is received for testing. The terms Smoke Test, Build Verification Test, Basic Acceptance Test and Sanity Test are used interchangeably; however, each of them applies to a slightly different scenario. A sanity test is usually unscripted and helps to identify missing dependent functionalities. It is used to determine whether a section of the application still works after a minor change. Sanity testing can be narrow and deep: a sanity test is a narrow regression test that focuses on one or a few areas of functionality.

24) Usability testing: Usability testing is a way to see how easy to use something is by testing it with real users. Users are asked to complete tasks, typically while they are being observed by a researcher, to see where they encounter problems and experience confusion.

25) Install/uninstall testing: Installation testing is performed to verify that the software has been installed with all the necessary components and the application is working as expected. This is very important, as installation is the first interaction with the end users. Companies launch a beta version just to ensure a smoother transition to the actual product. Uninstallation testing is performed to verify that all the components of the application are removed during the process. All the files related to the application, along with its folder structure, have to be removed upon successful uninstallation, and after uninstallation the system should be able to go back to a stable state.

26) Exploratory testing, ad-hoc testing: Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution. The planning involves the creation of a test charter, a short declaration of the scope of a short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be used.

27) Mutation testing: Mutation testing is a type of software testing where we mutate (change) certain statements in the source code and check whether the test cases are able to find the errors. It is a type of white box testing which is mainly used for unit testing. The changes in the mutant program are kept extremely small, so they do not affect the overall objective of the program. The goal of mutation testing is to assess the quality of the test cases, which should be robust enough to fail the mutant code. This method is also called a fault-based testing strategy, as it involves creating faults in the program.

What is Negative Testing?

Testing the system or application using invalid (negative) data is called negative testing. For example, entering a 6-character password where 8 characters are required should display an error message.

When we test an application by entering negative (invalid) values instead of actual values, the system should not accept those values and should give a message that the value is not correct. This is called negative testing.
Another example: if a user tries to type a letter in a numeric field, the correct behavior would be to display an “Incorrect data type, please enter a number” message. The purpose of negative testing is to detect such situations and prevent applications from crashing. Negative testing also helps you improve the quality of your application and find its weak points. (source: Jerry Ruban)

What is a Test Plan?

A Test Plan is a document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task (roles and responsibilities), and any risks and their mitigation.

A Test Plan includes a heading, revision history, table of contents, introduction, scope, approach, overview, the different types of testing that will be carried out, what software and hardware will be required, issues, risks, assumptions, and a sign-off section.
