Top Features of Power BI

As a complete self-service BI tool, Power BI stands out with its unique array of features for building truly interactive BI dashboards. One can build interactive dashboards from varied data sources in minutes. The dashboards are accessible in the app-based Power BI service platform, where users can view them, drill down, apply report filters, and even download them. The Power BI suite comes with some unique features in the BI analytics space.

Power BI suite

The Power BI suite encapsulates the features of Power Pivot, Power Query, and Power Map to provide a comprehensive solution for BI reporting and analytics. One can easily build a pivot table summarizing the data. With Power Query, one can combine and refine data across a wide variety of sources, including traditional relational databases, structured and semi-structured data, the web, Hadoop, Azure Marketplace, and others. Power Query also allows one to search for public data from sources such as Wikipedia. With Power Map, one can easily visualize data split by geography. Power BI also integrates with Microsoft Excel through Excel add-ins, so a dashboard report built in Excel can be published in a single click.

Importing data from multiple sources

Data can be imported from sources ranging from conventional relational databases to varied data platforms: Facebook, SharePoint Online lists, Salesforce, Google Analytics, Microsoft Azure data warehouse, the Hadoop Distributed File System (HDFS), and much more.

One can instantly create a website usage dashboard by connecting to the Google Analytics connector provided under Online Services in Power BI.
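
For illustration, here is a minimal sketch of the kind of script one could feed to Power BI Desktop's "Get Data > Python script" connector, which imports any pandas DataFrame the script defines as a table (the CSV file name and columns are hypothetical):

```python
# Minimal sketch for Power BI Desktop's "Python script" data source.
# Power BI imports every pandas DataFrame left in scope as a table.
import pandas as pd

# Hypothetical sales extract; replace with your own file or API pull.
sales = pd.read_csv("sales_extract.csv", parse_dates=["order_date"])

# Light shaping before the data lands in the Power BI data model.
sales["revenue"] = sales["quantity"] * sales["unit_price"]
monthly_revenue = (
    sales.groupby(sales["order_date"].dt.to_period("M"))["revenue"]
    .sum()
    .reset_index()
)
monthly_revenue["order_date"] = monthly_revenue["order_date"].astype(str)
```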

Visualizations

Combo charts, treemaps, fill maps, gauges and funnel charts provide customers with more ways to view their data in Power BI.

Power View enables ad hoc, self-service data visualization, while Power Map lets users view data with almost any geographical attribute in a 3D-rendered Bing Maps environment.

Natural-language search technology

Natural-language search technology lets users ask questions of their data by typing into a dialog box. The system understands the question and provides answers in the form of interactive tables, charts, and graphs.

With natural-language search, one can simply search for something and get macro-level insights as interactive dashboards within minutes. For example, to get counts split across geographies, one can search for "Olympic medals by country", "Unemployment rate (US)", "Housing prices by city", and many more questions.

Power BI Advantages & Features

To summarize, below are some of the unique features of Power BI:

  • Reports are deployable on multiple platforms, such as web, mobile apps, and tablets. You can use the device of your choice without worrying about the underlying database platform.

  • NLP is a unique feature which helps create reports with data from online services like Bing Maps and Google Analytics.

  • Power BI is capable of independently handling almost all the data extraction and summarization functions that ANSI SQL scripting and Excel provide, so there is no need for SQL scripts to extract and summarize data.

  • Visualizations facilitate drill-down and drill-up, and data can even be exported according to the hierarchy.

  • Easy integration with Excel data sources, including charts, pivot tables, and pivot charts. Excel dashboards can be pinned and published to the Power BI service.

  • Text search gives all possible answers about the data in the Power BI service. Power BI identifies objects by the keywords typed in and provides the output as a visualization.

  • Calculated columns and measures can be incorporated into the loaded dataset independently of the data source. Also, parameterized queries and filters can be applied at the dataset level without any effect on the data source.

  • A separate Relationships interface with auto-detection of relationships and IntelliSense.

  • DAX functions are supported, and the UI intelligently recognizes table and field names as you type.

  • Facilitates incorporating interactive visuals. One can also import visuals from the visuals gallery, a free online collection, and quickly incorporate them into a report.

  • Lastly, pricing is per user per month on a pay-as-you-go basis, which is very aggressive. For more details, see Power BI pricing.

  • Power BI users also have access to an online community where they can ask questions and raise tickets, and the official website has a blog and great resources for self-learning.

Contact us at feedback@bistasolutions.com for a free evaluation of how big data can be leveraged to give you a competitive advantage.

Supply Chain Optimization using Big Data

As we witness a pivotal change in the way big data is revolutionizing and redefining all aspects of our lives, it becomes increasingly necessary for professionals from all domains to think radically about its application in their industries. The inventions around the Hadoop ecosystem have enabled ground-breaking technologies, from driverless cars to intelligent assistants like Siri. It is not surprising that the crucially important field of supply chain optimization is ripe for a major breakthrough in how it has been approached until today.

Traditionally, procurement has been planned around either predefined reorder points triggering a procurement request, or around a fixed forecasting period using safety stock and an average sales forecast. The problem with this approach is that there is no feedback loop to react in real time as business scenarios change. This led either to a "lost opportunity", from not having the right inventory or the right price, or to "dead stock", from wrong stocking or purchasing decisions.

[Figure: supply chain process challenges]

This problem of not being agile and responsive to events in the marketplace can be addressed using big data technologies. The process starts by dividing the various steps involved in supply chain automation into multiple operational windows. This facilitates the prioritization of decisions based on how frequently they need to be evaluated. The results of each phase feed into the decisions of the next, creating a feedback loop that makes the entire process more responsive to external events.

The process starts with strategic planning, which involves a high-level analytics process in Hadoop to baseline the data. Here we automatically calculate the various parameters that impact the supply chain decision process. This is generally an iterative process, run on a quarterly or monthly schedule depending on the type of business. The metrics from the previous period feed into this process, and the performance of the various parameters is evaluated and tweaked accordingly.

The next phase involves tactical decision making, where various decisions regarding procurement and transfers are made based on the parameters and the demand forecast. In this phase, decisions about what to buy, when, and from which vendor are made. Decisions on how to stock a multi-echelon distribution network are also made in this step.

After this, the next phase involves continuous evaluation of the performance of the supply chain and tweaks to inventory placement, selling prices, and so on. These techniques of near real-time decision making are also referred to as "Demand Sensing".

[Figure: optimization cycle]

The Details

Strategic Planning:

  • Inventory categorization: In this part, various methodologies for categorizing inventory are used, including FNS classification, order frequency analysis, and price sensitivity analysis.
  • Multivariate clustering: The various parameters that influence demand are then automatically evaluated by creating clusters, using techniques like Principal Component Analysis and other clustering models.
  • Determining the best-fit algorithm: Each item in the inventory has a different demand pattern; it could have a trend, seasonality, etc. The best model for forecasting demand varies for items in different clusters. The best model is identified and stored for forecasting (a minimal sketch follows this list).
  • Multi-echelon network calculation: If the company has multiple warehouses forming a distribution network, we need to determine the best roll-up and aggregation strategy for each item in the network.
  • Supply chain parameters: The various parameters that influence procurement and transfers are calculated based on the demand pattern and historical receiving performance.
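
As a concrete illustration of the best-fit step above, here is a minimal sketch that scores a few simple candidate models on a holdout window and keeps the winner; the demand series and the candidate models are hypothetical placeholders:

```python
# Minimal sketch: pick the best-fit forecasting model per item by
# comparing holdout-window error. Data and models are hypothetical.
import numpy as np

demand = np.array([12, 15, 14, 18, 20, 19, 23, 25, 24, 28, 30, 29], dtype=float)
train, test = demand[:-4], demand[-4:]

def naive(history, horizon):           # repeat the last observation
    return np.repeat(history[-1], horizon)

def moving_average(history, horizon):  # mean of the last three periods
    return np.repeat(history[-3:].mean(), horizon)

def drift(history, horizon):           # extrapolate the average slope
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + slope * np.arange(1, horizon + 1)

candidates = {"naive": naive, "moving_average": moving_average, "drift": drift}
errors = {name: float(np.mean(np.abs(f(train, len(test)) - test)))  # MAE
          for name, f in candidates.items()}
best = min(errors, key=errors.get)
print(f"best-fit model: {best}, errors: {errors}")
```

In a real pipeline this selection would run per item (or per cluster), and the winning model would be stored for the tactical phase.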

Tactical Planning:

  • Demand forecast: The demand forecast is produced for the various items in the inventory for the selected period. The best-fit algorithm and clusters determined in the strategic planning process are used to calculate the forecast.
  • Procurement plan: The projected demand and the forecasted inventory position for the period are used to calculate the procurement plan. The historical performance of the vendor is used to determine the order date and quantities. The EOQ, safety stock, and other inventory parameters are used to create the procurement plan for the period (see the sketch after this list).
  • Inventory transfers: For a distribution network, stock placements at the various locations are calculated and the transfers are created.
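
To make the procurement-plan step concrete, here is a minimal sketch of the textbook safety stock and reorder point calculation it relies on; the service level, demand, and lead time figures are hypothetical:

```python
# Minimal sketch: classic safety stock and reorder point.
# All input figures are hypothetical.
import math
from statistics import NormalDist

avg_daily_demand = 40.0   # units per day
std_daily_demand = 9.0    # std. deviation of daily demand
lead_time_days = 7.0      # vendor lead time
service_level = 0.95      # target cycle service level

z = NormalDist().inv_cdf(service_level)                # ~1.645 for 95%
safety_stock = z * std_daily_demand * math.sqrt(lead_time_days)
reorder_point = avg_daily_demand * lead_time_days + safety_stock

print(f"safety stock:  {safety_stock:.0f} units")
print(f"reorder point: {reorder_point:.0f} units")
```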

Demand Sensing:

  • The most crucial aspect of the big data architecture is the ability to respond to changes in actual sales and adapt the strategy accordingly.
  • "Lost sales" can be tracked and compared against forecasted sales to evaluate an under- or over-demand scenario. If demand is higher than forecast, purchase orders can be expedited to meet the unexpected demand. This can also lead to decisions to internally transfer inventory across locations (inventory levelling).
  • The price sensitivity determined during the strategic planning phase can be used to lift lagging sales, for example by running promotions to boost sales to the expected values.
  • Some variations in the supply chain, like delayed shipments from vendors, can be handled either by inventory levelling or by expediting other POs on order.
  • The advanced feature of text analytics can be used to forewarn of potential disruptions to the supply chain, so that precautionary steps can be taken to avoid any impact on inventory.

[Figure: automation workflow]

Conclusion:

The new age of data science and big data technology opens new vistas for automating the hitherto manual process of supply chain optimization. Technologies like Hadoop enable working with millions of SKUs and several years of historical data running into billions of transactions. The integration of machine learning libraries in tools like Spark has brought predictive analytics into the mainstream. The latest Lambda and Kappa architectures enable stream processing of near real-time data and the creation of predictive models that can respond to changes in business patterns. The above process can greatly improve the performance of the supply chain and thus the overall business.

Contact us at feedback@bistasolutions.com for a free evaluation of how big data can be leveraged to provide you a competitive advantage.

 

How to Launch your own Magento 2 store

Launch your own Magento Store

Download the latest Magento 2 from the Magento eCommerce website.

Please refer to the link: www.magento.com/download

Prerequisites for Magento 2 are as follows:

  • Apache 2.2 or 2.4
  • PHP 7.0.2, 5.6.x or 5.5.x (PHP 5.4 is not supported)
  • MySQL 5.6.x

This assumes you have Apache, PHP, and MySQL ready, matching the prerequisites mentioned above for Magento e-commerce shopping sites.

Let’s Start:

STEP 1: Extract the Magento 2 folder for your Magento store and make it accessible through the web server. We recommend putting it in the html folder for Apache 2.4, or you can create a virtual host and make Magento 2 accessible from anywhere.

STEP 2: Open the Magento 2 URL in your web browser and follow the step-by-step installation instructions displayed there. You will then be able to successfully install the store and the admin panel of your Magento website.

WE RECOMMEND: During installation, Magento 2 checks for a few PHP and Apache libraries that must be pre-installed. Make sure you have them ready, or you will not be able to proceed further with the installation.

Magento 2 setup will guide you through.

STEP 3: At the end of the installation, Magento 2 asks for a unique Magento admin URL. Make sure you remember it or note it down, along with the admin username and password, before you proceed.

Installing Magento 2 is one thing, but setting up a store for your products is entirely different. There are a few things you need before you launch your online store: products, categories, the store URL, a secure URL, email configuration, contact details, a payment gateway, and shipping method details. Magento is one of the best e-commerce platforms.

You can send us your comments on feedback@bistasolutions.com

Automation testing – Myths and Realities

It is always very important to analyse what purpose a particular technology serves before adopting it in your organisation. Even though automated software testing has several known advantages, such as high productivity, faster regression, quick feedback to development teams, and increased ROI, not all organisations can adopt automated software testing and replace manual testing. A lot of testers hold the superstitious belief that automation testing is better than manual testing and that the former can replace the latter; however, this is true only in a few circumstances. A testing team should be aware of the myths and the realities of automation testing before jumping to accept it. Here are a few myths of automation testing, followed by their realities.

#Automated Software Testing is Fast! – Myth

#Well, automation testing does consume time! – Reality

Automated software testing can help the organization in a big way when used in the right way and with the right set of expectations. But for this to be possible, we have to put in some time, money, and most importantly patience. Testers need to understand the domain and the test cases to be automated, and then choose a framework accordingly to build the automated scripts. This builds a strong foundation for further challenges to come.

The amount of effort to be put into automated software testing equals the effort put into developing an application that needs thorough validation. Automation test scripts must be scrutinized properly, keeping every possible set of test data under consideration, which also includes negative testing. Failing to do so and handing over a partly tested tool leads to failure of the automated scripts during execution, as a result of which you tend to lose confidence in the tool.
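
As a small illustration of covering both positive and negative test data, here is a minimal pytest sketch; `validate_discount` is a hypothetical unit under test:

```python
# Minimal pytest sketch: parameterized positive and negative test data.
# validate_discount is a hypothetical function under test.
import pytest

def validate_discount(percent):
    """Hypothetical unit under test: accepts discounts in [0, 100]."""
    if not isinstance(percent, (int, float)) or not 0 <= percent <= 100:
        raise ValueError("discount must be a number between 0 and 100")
    return True

@pytest.mark.parametrize("percent", [0, 10, 100])
def test_valid_discounts(percent):
    assert validate_discount(percent)

@pytest.mark.parametrize("percent", [-5, 101, "ten"])
def test_invalid_discounts(percent):
    with pytest.raises(ValueError):
        validate_discount(percent)
```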

#Automated Software Testing is a Replacement for Manual Testing! – Myth

#Automation testing does prove to be better than manual testing, but not always! – Reality

Just as robots cannot replace humans on earth, automated machine testing will never be able to completely replace manual testing capabilities. It is unrealistic to believe that automation testing is a replacement for manual testing. A project will always need a human brain to analyze the test results for applications that are unstable and change frequently. In this case, automation testing is used only as a reference, not a replacement. Automation testing is best suited for applications that are static, independent of other modules, and need to be checked during regression testing, or for applications whose development is complete.

#Automated Software Testing has Quick ROI! – Myth

#Automation testing's ROI is a long-term return! – Reality

When implementing automated testing solutions, apart from just writing the test scripts, there are also a few interrelated software development tasks involved. First of all, a framework that supports the testing operations has to be developed, which is a huge task in itself and requires highly skilled people. However, even if a team decides to use a fully developed framework, the initial test case checks will take more time than executing the tests manually. So if an application is still in the development stage and requires quick feedback, test automation is not the right choice. The ROI of automation testing is, therefore, a long-run action plan.

#Automated Software Testing holds good for any Test Case Scenario! – Myth

#Automated testing at the GUI layer is always a critical problem! – Reality

Automation testing works considerably well for checking process flows, user experience with the application, or integration with third-party applications. But using automation testing to check the functionality of a system's GUI has a setback. A system's GUI undergoes frequent changes in design and usability, even though the functionality of the UI remains the same, and this is why test automation for a UI constantly fails. Automation testing applied to a UI is also slower, and so is the feedback it gives developers.

#Expecting 100% Automation without any Failure! – Myth

#Executing automated software testing without a failure is practically impossible! – Reality

There can be several reasons why test scripts can fail during execution. Be it data variation, environment issues (downtime), network failures, or changes in the UI, failure of test cases cannot be ruled out.

Conclusion:

Automation testing is undeniably a prime strategy for any testing team, yet not all organisations sail through in adopting it. This can be addressed by taking care of the following points:

  • Before adopting test automation, first do some homework on understanding the application to be automated; this will help in setting the right deadlines and expectations.

  • Discuss and decide with the team which key areas need to be automated.

  • Automation testing is only for stable, fully developed applications, not for applications that keep changing from time to time.

  • Do not be afraid of tests that constantly give wrong results; instead, keep faith and aim for a clean and reliable test suite.

Please feel free to write to us at feedback@bistasolutions.com with your feedback on how this blog helped you understand the realities of automation testing.

5 Statistical Methods For Forecasting Quantitative Time Series

Time Series Algorithms

Time is one of the most important factors on which our businesses and real life depend. Technology has helped us manage time, with continuous innovations taking place in all aspects of our lives. Don't worry, we are not talking about anything which doesn't exist. Let's be realistic here!

Here, we are talking about techniques for predicting and forecasting future strategies. The data we generally deal with, which is time-based, is nothing but "time series data", and the model we build on it is a "time series model". As the name indicates, it works on time-based data (years, days, hours, and minutes) to explore hidden insights and to understand the unpredictable nature of the market, which we have been attempting to quantify.

TIME SERIES:  

Time series data provides visual information on the unpredictable nature of the market we have been attempting to quantify and get a grip on.

A time series is an ordered sequence of observations of a variable, captured at equally spaced time intervals: anything observed sequentially over time at regular intervals such as hourly, daily, weekly, monthly, or quarterly. Time series data is important when you are predicting something that changes over time using past data. In time series analysis, the goal is to estimate future values using the behavior of the past data.

There are many statistical techniques available for time series forecasting; we have found a few effective ones, listed below:

Techniques of Forecasting:

    • Simple Moving Average (SMA)
    • Exponential Smoothing (SES)
    • Autoregressive Integrated Moving Average (ARIMA)
    • Neural Network (NN)
    • Croston

METHOD-I: SIMPLE MOVING AVERAGE (SMA)

Introduction:

A simple moving average (SMA) is the simplest forecasting technique. It is calculated by adding up the last 'n' periods' values and then dividing that sum by 'n'. The moving average value is then used as the forecast for the next period.

Why Do We Use SMA?

Moving averages can be used to quickly identify whether sales are moving in an uptrend or a downtrend, depending on the pattern captured by the moving average.

That is, a moving average is used to smooth out irregularities (peaks and valleys) so trends are easier to recognize.

SMA Working Example:

To better understand SMA, suppose we have time series data with twelve observations of price at equal intervals of time. After plotting the data, it appears to have an upward trend with a lot of peaks and valleys.
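
To make this concrete, here is a minimal pandas sketch; the twelve price values are hypothetical, and the SMA of the last three periods serves as the next-period forecast:

```python
# Minimal sketch: 3-period simple moving average as a one-step forecast.
# The twelve price observations are hypothetical.
import pandas as pd

prices = pd.Series([10, 12, 11, 13, 15, 14, 16, 18, 17, 19, 21, 20])
sma3 = prices.rolling(window=3).mean()   # smooths out peaks and valleys

next_forecast = sma3.iloc[-1]            # average of the last 3 periods
print(f"next-period forecast: {next_forecast:.2f}")  # (19 + 21 + 20) / 3 = 20.00
```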

Conclusion: The larger the interval, the more the peaks and valleys are smoothed out; the smaller the interval, the closer the moving averages are to the actual data points. SMA suits historical data with many peaks and valleys, such as stock data or retail data.

METHOD II: EXPONENTIAL SMOOTHING

Introduction:

This is the second well-known method of producing a smoothed time series. Exponential smoothing assigns exponentially decreasing weights as the observations get older.

Why Do We Use Exponential Smoothing?

Exponential smoothing is a way of "smoothing" out the data by removing much of the "noise" (random effects), thereby giving a better forecast.

Types of Exponential Smoothing Methods

  • Simple Exponential Smoothing: If you have a time series that can be described using an additive model with a constant level and no seasonality, you can use simple exponential smoothing to make short-term forecasts.

  • Holt's Exponential Smoothing: If you have a time series that can be described using an additive model with an increasing or decreasing trend and no seasonality, you can use Holt's exponential smoothing to make short-term forecasts.

  • Winters' Three-Parameter Linear and Seasonal Exponential Smoothing: If you have a time series that can be described using an additive model with an increasing or decreasing trend and seasonality, you can use Holt-Winters exponential smoothing to make short-term forecasts.

Graphical Views:

Exponential Smoothing:

Here, the alpha value is the smoothing constant; the method that treats the other two factors (seasonality and trend) as constant is called simple exponential smoothing. Holt's (double) exponential smoothing and Winters' exponential smoothing additionally deal with those two factors, trend and seasonality (via beta and gamma).
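
A minimal statsmodels sketch of the three variants, assuming a regular quarterly series (the demand values here are hypothetical):

```python
# Minimal sketch: the three exponential smoothing variants in statsmodels.
# The quarterly demand values are hypothetical.
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing, ExponentialSmoothing

y = pd.Series([112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118],
              index=pd.period_range("2020Q1", periods=12, freq="Q").to_timestamp())

ses = SimpleExpSmoothing(y).fit()                     # level only (alpha)
holt = ExponentialSmoothing(y, trend="add").fit()     # level + trend (beta)
hw = ExponentialSmoothing(y, trend="add", seasonal="add",
                          seasonal_periods=4).fit()   # + seasonality (gamma)

print(hw.forecast(4))   # forecast for the next four quarters
```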

Conclusion: The larger the alpha, the closer the forecast is to the actual data points, and vice versa. This method is suitable for forecasting data with no trend or seasonal pattern (alpha = smoothing constant).

METHOD-III: AUTOREGRESSIVE INTEGRATED MOVING AVERAGE (ARIMA)

Autoregressive Integrated Moving Average (ARIMA):

ARIMA is a statistical technique that uses time series data to predict the future. Its parameters are (p, d, q), which refer to the autoregressive, integrated, and moving average parts of the model, respectively. ARIMA modeling takes care of trends, seasonality, cycles, errors, and the non-stationary aspects of a data set when making forecasts.

Understanding the ARIMA Model in General Terms

How Do We Understand the ARIMA Model?

To understand this, consider a real-world scenario: a sugarcane juicer. It is difficult to extract all the juice in one go, so the shopkeeper repeats the process several times until there is no more juice left in the residue. That is how ARIMA works: the idea is that the final residual should look like white noise; otherwise, there is still juice (information) left in the data to extract.

How Do We Use ARIMA Model?

ARIMA requires stationarity in the data; the data should also show constant variance in its fluctuations over time. Getting the proper values for the ARIMA parameters is based on the "identification process" proposed by Box and Jenkins.

When Do We Use ARIMA Model?

As we all know, ARIMA is mainly used to project future values using historical time series data. Its main application is short-term forecasting, requiring a minimum of about 38-40 historical data points with a minimum number of outliers. If you do not have at least 38 data points, it is advisable to look for other methods.

Working Example of ARIMA

Here, we try to understand ARIMA using quarterly European retail trade data from 1996 to 2011. The data are non-stationary, with some seasonality, so we first take a seasonal difference. The seasonally differenced data still appear non-stationary, so we take an additional first difference, and further differences if required.

With the seasonal ARIMA model's basic requirements checked, it is ready for forecasting. Notice how the forecasts for the next three years follow the recent trend in the data (this occurs because of the double differencing).
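
A minimal sketch of that workflow with statsmodels; the series below is a placeholder for the retail data, and the (0,1,1)(0,1,1)4 order is an illustrative choice consistent with one first difference and one seasonal difference:

```python
# Minimal sketch: seasonal ARIMA on a quarterly series with one first
# difference (d=1) and one seasonal difference (D=1), as described above.
# The series values and the model order are illustrative placeholders.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

y = pd.Series(
    [89.1, 90.2, 91.5, 92.7, 92.3, 93.6, 94.9, 96.2,
     95.8, 97.0, 98.3, 99.5, 99.1, 100.2, 101.4, 102.6],
    index=pd.period_range("1996Q1", periods=16, freq="Q").to_timestamp(),
)

model = SARIMAX(y, order=(0, 1, 1), seasonal_order=(0, 1, 1, 4))
result = model.fit(disp=False)
print(result.forecast(steps=12))   # the next three years of quarters
```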

Conclusion: ARIMA works best when your data exhibits a stable or consistent pattern over time with a minimal number of outliers.

METHOD-IV: NEURAL NETWORK

Introduction:

ANN: An artificial neural network (ANN) is a machine learning approach that models the human brain and consists of several artificial neurons. Their ability to learn by example makes them very flexible and powerful.

Why Do We Use Neural Networks?

Neural networks have the strength to derive meaning from complicated or imprecise data and can often detect patterns and trends that cannot easily be spotted by the human eye or by other computing techniques. Neural networks also offer advantages such as adaptive learning, self-organization, real-time operation, and fault tolerance.
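
For example, here is a minimal scikit-learn sketch that trains a small neural network to forecast the next value of a hypothetical demand series from its three previous values:

```python
# Minimal sketch: a small neural network forecasting from lagged values.
# The demand series is hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

demand = np.array([20, 22, 25, 24, 27, 30, 29, 33, 35, 34,
                   38, 41, 40, 44, 47, 46, 50, 53, 52, 56], dtype=float)

lags = 3  # predict the next value from the previous three
X = np.array([demand[i:i + lags] for i in range(len(demand) - lags)])
y = demand[lags:]

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

next_value = model.predict(demand[-lags:].reshape(1, -1))
print(f"next-period forecast: {next_value[0]:.1f}")
```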

Applications of neural networks

Nowadays, NNs are important in almost every field; some examples are listed below:

  • Sales Forecasting

  • Industrial Process Control

  • Customer Research

  • Data Validation

  • Risk Management

  • Target Marketing

Conclusion:

We can use NNs in any type of industry and benefit from them, as they are very flexible and do not require a hand-crafted algorithm for each problem. They are also regularly used to model parts of living organisms and to investigate the internal mechanisms of the brain.

METHOD-V: CROSTON

Introduction:

Croston's method is a modification of exponential smoothing for products with sporadic demand, suggested by Croston in 1972. Its core value is the estimation not only of the average demand volume, but also of the length of the time interval between two non-zero demands, a pattern called intermittent demand.

The Croston method works in two steps. First, separate exponential smoothing estimates are made of the average size of demand. Second, the average interval between demands is estimated. These are then combined in a constant model to predict future demand.

How Does Croston's Method Work?

Croston's method has a complex formula, but the output is very simple. The example below explains what Croston's method does, simplified for the sake of understanding.

Above is the 12-month average vs. Croston's, while below is the 5-month average vs. Croston's.

As you can see, Croston's removes the periods that have no demand, averaging only the periods that have demand. Next, it calculates the frequency of demand. The math behind this is complex, but the output is extremely similar to performing exponential smoothing.
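
A minimal sketch of that two-step logic, smoothing demand sizes and inter-demand intervals separately and forecasting their ratio; the demand history and smoothing constant are hypothetical:

```python
# Minimal sketch of Croston's method: smooth the non-zero demand sizes
# and the intervals between them separately; forecast = size / interval.
# The demand history and the smoothing constant are hypothetical.
def croston(demand, alpha=0.1):
    size = interval = None
    periods_since_demand = 1
    for d in demand:
        if d > 0:
            if size is None:          # initialize on the first demand
                size, interval = d, periods_since_demand
            else:                     # exponential smoothing updates
                size = alpha * d + (1 - alpha) * size
                interval = alpha * periods_since_demand + (1 - alpha) * interval
            periods_since_demand = 1
        else:
            periods_since_demand += 1
    return size / interval            # average demand per period

history = [0, 0, 5, 0, 0, 0, 7, 0, 6, 0, 0, 8]
print(f"Croston forecast per period: {croston(history):.2f}")
```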

Why Do We Use CROSTON?

In the given figure, we have two Croston's forecasts based on demand histories with only a few non-zero data points. This is where Croston's comes into the picture and shows its benefits.

  • At the very beginning, Croston's detects cyclicity and periodicity in the demand pattern's data points. In this case, it suggests that demand could occur after roughly 3.5 (4, after rounding up) zero periods.

  • The second most important thing Croston's does is adjust the next expected occurrence forward from the last non-zero period if the recent periods are zero periods.

So the objective of the forecast is to predict consumption at the right moment with the right quantity. Croston's does try to predict the "right moment", which is more sophisticated than a moving average.

Conclusion:

The Croston method is a forecasting strategy for products with intermittent demand; in forecasting tools it is typically selected as the strategy in the univariate forecast profile.

Croston's can be closely emulated with exponential smoothing, and any timing benefit is usually absorbed by order lot sizing and/or safety stock in supply planning. Therefore, demand history must not only be lumpy but also very low for Croston's to be of value. Croston's can thus be seen as a specialty forecasting method that provides value in certain limited circumstances.

For more information on statistical methods for forecasting or any such implementation, you can simply reach out to us at sales@bistasolutions.com. If you'd like to implement software with forecasting tools for your business, get in touch using our contact form.

Data Selection, Gathering and Preparation for Demand Forecasting

Usually, the database from which reports are fetched contains a mix of data: data used by the software itself, configuration values, transaction-level data, and so on. So selecting the right kind of data, and gathering it into a relevant output on which the next step (i.e. FNS segmentation) can be applied, plays an equally important role in better demand forecasting.

Data preparation:

Continuing our previous example, say that for demand forecasting for Mint Candies we chose all the data available in the backend. In that case, fields like the name of the salesman who sold the candies, or information about the vehicle in which they were shipped, are extra information that is unlikely to be useful in forecasting candy sales. Also, syncing a large chunk of data every time results in performance and slowness issues when fetching data for the report. So the first thing to take care of is selecting exactly the useful data for processing reports.

The next insight came from one of our existing clients, who had a common business scenario. To understand it with our example: our company already has three existing products, Mint Candies, Bar Chocolates, and Luxury Dark Chocolates, and a new product, say Jelly Beans, is launched in the middle of the financial year. By applying the same pattern and FNS segmentation, Jelly Beans also starts showing sales progress. But in the end-of-year evaluation across all products, Jelly Beans, even after selling well, projects low sales in the annual report. The reason is its mid-year introduction compared to the existing products. So the next thing to take care of is tracking each product from the date it was introduced in the market or warehouse.

Seasonality and Trend:

Moving further in the analysis, we learned that products have to be tracked season-wise, e.g. during festivals vs. regular days, and also that, based on customers' tastes, certain products do well in one part of the country while not doing well in another. So products also have to be tracked geographically. There are many levels of tracking: customer level, market level, shop-wise, and at times hub level. Also, if we expand our example horizontally across product lines such as chocolates, soaps, and chips, it becomes necessary to track products by category as well, so that the performance of each line of business can be tracked.

Segmentation:

Last but not least is a trend that most of the industry is now adopting: segmentation of data. This ideally means dividing customers into groups based on their buying patterns and lifestyles, and then taking business steps focused on a certain group of customers. A simple example: "a group of customers who buy Luxury Dark Chocolates frequently" can be treated as a Platinum group, and to keep them engaged with the product, extra discounts or value-added services can be offered to them. So to continue or improve product sales, we need to take care of data segmentation as well.
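
As a toy illustration of the idea, here is a minimal pandas sketch that tags frequent Luxury Dark Chocolate buyers as a "Platinum" group; the order data and the threshold are hypothetical:

```python
# Minimal sketch: segment customers by purchase frequency of one product.
# The order data and the Platinum threshold are hypothetical.
import pandas as pd

orders = pd.DataFrame({
    "customer": ["A", "A", "B", "C", "C", "C", "D"],
    "product": ["Luxury Dark", "Luxury Dark", "Mint Candy",
                "Luxury Dark", "Luxury Dark", "Luxury Dark", "Bar Chocolate"],
})

luxury_counts = (orders[orders["product"] == "Luxury Dark"]
                 .groupby("customer").size())

# Customers with two or more luxury purchases form the Platinum group.
platinum = luxury_counts[luxury_counts >= 2].index.tolist()
print(f"Platinum customers: {platinum}")   # ['A', 'C']
```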

We hope our experiences help in some way in optimizing or directing your business at any given point in time. As always, we conclude with: if you like any of our advice or suggestions, or if you are looking forward to any such implementation, you can mail us at sales@bistasolutions.com or contact us here.

How Principal Component Analysis can reduce complexity in demand forecast when you have too many predictors

Organizations are facing challenges in managing their margins and keeping up with industry growth. Predictive analytics has helped organizations stay ahead of the competition and bring value to their customers. Many organizations have used predictive analytics across departments, which has helped them increase market share, cut costs, and retain customers while maintaining healthy margins.

One of the most challenging fields in predictive analytics is demand forecasting, or demand planning. What is the demand for my product in the market, and how much inventory do I need to keep in stock to avoid over- or under-stocking? These are two critical questions organizations must answer today.

The key factor while forecasting demand is to list the variables that are going to impact the forecast. There has been great demand for macroeconomic forecasts that use many predictors to produce accurate results. Ignoring relevant variables, or handling them poorly, influences forecasting accuracy and may result in suboptimal forecasts. Therefore, statisticians have been developing effective ways to utilize the information available among these predictors to improve the performance of forecasts.

Principal component analysis is one method that identifies a smaller number of uncorrelated variables, called "principal components", in a large data set. The objective of principal components analysis is simply to obtain a relatively small number of factors that account for most of the variation in a large number of observed variables.

Let’s look at an example –

Say we want to analyze customer responses to several characteristics of four types of candies (Dark, Caramel, Mint, Bar): shape, size, texture, color, packaging, smell, taste, and price. This step is known as product classification (refer to Picture a).

[Picture a]

We need to determine a smaller number of uncorrelated variables, which will help reduce complexity when forecasting demand. Principal components analysis allows us to do that. The results yield the following patterns (refer to Picture b):

  • Taste, smell, and texture form a “Candy quality” component.
  • Packaging and shape form a “Desirability” component.
  • Size and price form a “Value” component.

[Picture b]
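
A minimal scikit-learn sketch of this reduction, assuming survey responses arrive as an eight-column table (the response matrix below is a random placeholder for real data):

```python
# Minimal sketch: reduce 8 correlated survey attributes to 3 components.
# The response matrix is a random placeholder for real survey data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

attributes = ["shape", "size", "texture", "color",
              "packaging", "smell", "taste", "price"]
rng = np.random.default_rng(0)
responses = rng.normal(size=(200, len(attributes)))  # 200 respondents

X = StandardScaler().fit_transform(responses)        # PCA expects scaled inputs
pca = PCA(n_components=3)
scores = pca.fit_transform(X)                        # 200 x 3 component scores

print(pca.explained_variance_ratio_)  # variation captured by each component
print(pca.components_)                # loadings: which attributes drive each one
```

On real responses, one would inspect the loadings to name the components, arriving at groupings like the candy quality, desirability, and value components above.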

This way we can reduce the number of variables, and we can use these three components as inputs for a demand forecast analysis that determines how many candies we will sell in a particular month or quarter based on historical data. Want to know more detail? Contact us today!

7 ways Big Data can dramatically change Supply Chain Analytics

The requirement for managing an efficient supply chain has always been a balancing act between maintaining high service levels and a healthy inventory turnover ratio. There have been numerous studies and much research conducted over the years to address the critical issues facing supply chain practitioners. There have also been many software applications and packages custom-built to ensure that "lost sales" or "stock-outs" do not become a sore point in sales review meetings. This has mostly been done at the expense of low inventory turns and overstocking of parts.

The latest developments in big data technology, which is sweeping across many industries and bringing huge competitive advantages, can be applied just as reliably to the challenges faced by supply chain professionals. Big data gives the industry unprecedented power by bridging structured and unstructured data and presenting information at the practitioner's fingertips for quick decision making and insights. The following are some major game-changing capabilities which big data can bring to the practice of supply chain analytics.

1. Leveraging large volumes of data: A lot of companies have large volumes of historical data running into multiple years, or even decades in some instances. Hadoop's distributed storage architecture, along with efficient storage formats like Parquet, Avro, and ORC, enables compact storage with very fast access. Thus the huge volume of data, which hitherto was not leveraged to its fullest extent, can now be effectively used for advanced analytics.

2. Blending unstructured data for deep intelligence: The availability of NoSQL databases like HBase and Cassandra in the big data landscape enables analytics on unstructured text data, which has not been possible until now using legacy analytics and forecasting packages. This means that information from XML sources for product catalogs, or from supplier web services, can be integrated into the supply chain decision-making process.

3. Advanced analytical models: The big data community has developed very advanced machine learning algorithms which can be leveraged to build advanced analytical models for demand forecasting and procurement planning. Tools like Spark, with its machine learning library (MLlib) and R integration through SparkR, enable very advanced models to be used on time series and other data for accurate forecasting and prediction.
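
For instance, here is a minimal PySpark sketch of fitting an MLlib regression model on demand data; the column names and rows are hypothetical:

```python
# Minimal PySpark sketch: fit an MLlib regression model on demand data.
# The column names and example rows are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("demand-forecast-sketch").getOrCreate()

df = spark.createDataFrame(
    [(1, 10.0, 100.0), (2, 12.0, 110.0), (3, 11.0, 108.0), (4, 14.0, 121.0)],
    ["period", "promo_spend", "demand"],
)

assembler = VectorAssembler(inputCols=["period", "promo_spend"],
                            outputCol="features")
train = assembler.transform(df)

model = LinearRegression(featuresCol="features", labelCol="demand").fit(train)
print(model.coefficients, model.intercept)
spark.stop()
```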

4. Text analytics: In addition to structured data stored in systems like Hive and semi-structured data stored in HBase, there are numerous tools in the big data toolbox, like Elasticsearch and Apache Solr, which open the door to analyzing text data in various systems. This enormous amount of textual data can be utilized to gather additional insight about product feedback, quality, and other metrics, which can feed into supply chain planning for further improvements.

5. External data source blending: External data can add a lot of value to demand forecasting or lead time prediction by leveraging real-time information. Advances in big data technologies enable supply chain software to respond to our ever-changing world in a dynamic manner. Hadoop has been successfully used as an ETL tool to unify such disparate data, and the data from such external systems can be used to identify potentially new suppliers with better lead times and prices.

6. Agility in response: Big data components like Oozie, Sqoop, Flume, Kafka, and Storm bring the capability of doing procurement in real time rather than periodically. These features make the company's supply chain more agile in responding to a spike in demand, a delay in shipment, or a sudden requirement for one of the components in a multi-echelon network.

7. Automated decisions: Gone are the days when supply chain professionals would pore over information in multiple spreadsheets to make procurement decisions. Deep learning systems based on neural networks can now take automated actions based on previously learned data. Moreover, these algorithms can get smarter over time by comparing their responses against actual results. If you wish to know more, get in touch with our team.

Reasons why we need to segment and classify product inventory when forecasting demand

Introduction

Inventory management and demand forecasting have always been difficult jobs, especially in product-driven industries. We have often come across clients who struggle with proper visibility management and product forecasting, and based on that, the purchasing and stocking process becomes inaccurate. Some of the major questions that arise are:

  1. How much inventory should be maintained for a particular product?
  2. How do we forecast sales of a particular product, given that different products have different sales behavior?
  3. How can the inventory be utilized effectively?

At times there is demand for Product A, but it cannot be purchased or stocked because the warehouse is already full of inventory for other products that are low on demand and occupy the space in the system. To understand the concept, let's take the example of a Fast Moving Consumer Goods (FMCG) company.

Let's say an FMCG company has products like Mint Candies, Bar Chocolates, and Luxury Dark Chocolates, and the company stocks inventory of all its products in equal quantities.

[Graph 1]

As seen in the graph above, if all three products are initially kept at the same inventory level, it is observed that Mint Candies give more sales than the other two chocolates. So in the ideal situation, according to supply chain fundamentals, the decision would be to increase the inventory of Mint Candies to drive more sales for the company. But the biggest challenge would be the Luxury Dark Chocolate sales and the space those chocolates occupy, which prevents increasing the inventory of Mint Candies.

So what is the solution?

Bista Solutions came up with an implementation for one of our esteemed clients, providing FNS segmentation as a solution that not only helps them manage their inventory effectively but also forecasts sales accurately. In this solution, based on sales order history, we segmented the products into 3 categories:

  1. Fast moving
  2. Normal moving
  3. Slow moving

In our example, for better product classification, we associate Mint Candies with Fast moving, Bar Chocolates with Normal moving, and Luxury Dark Chocolates with Slow moving. So while stocking, the inventory gives the maximum share to the fast-moving product and the least to the slow-moving product. By following this strategy, inventory classification is improved, and warehouse utilization issues from holding the wrong inventory are reduced. Further, post-segmentation analysis showed that the sales pattern of a product, i.e. the quantity sold per sales order, is equally important to consider. From season to occasion, the sale quantity shows deviations which need to be captured as well for accurate forecasting.

The same is achieved by determining the Average Demand Interval (ADI), which is calculated as:

ADI = total number of periods / number of periods with non-zero demand

With the help of ADI, we further segmented the products into:

  1. Consistent

  2. Erratic

Now we have product segmentation along with selling behavior tracked: if a product is segmented under Consistent, it shows an almost consistent sales quantity, whereas under Erratic there is always a drastic difference, with an inconsistent selling pattern. So now we have the final product segmentation as follows (a minimal sketch of the computation follows the list):

  1. Fast moving – Consistent

  2. Fast moving – Erratic

  3. Normal moving – Consistent

  4. Normal moving – Erratic

  5. Slow moving – Consistent

  6. Slow moving – Erratic
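
A minimal pandas sketch of how such a combined segment could be assigned from sales history; the thresholds and the sales table are hypothetical:

```python
# Minimal sketch: combine an FNS class (by sales volume) with an ADI-based
# Consistent/Erratic flag. Thresholds and the sales table are hypothetical.
import pandas as pd

sales = pd.DataFrame({
    "product": ["Mint Candies"] * 6 + ["Luxury Dark"] * 6,
    "period": list(range(1, 7)) * 2,
    "qty": [30, 28, 32, 31, 29, 30,   # steady, high volume
            0, 0, 12, 0, 0, 2],       # sporadic, low volume
})

def segment(group, total_periods=6):
    volume = group["qty"].sum()
    nonzero = (group["qty"] > 0).sum()
    adi = total_periods / nonzero     # Average Demand Interval
    fns = "Fast" if volume >= 100 else ("Normal" if volume >= 30 else "Slow")
    behavior = "Consistent" if adi <= 1.32 else "Erratic"  # common ADI cutoff
    return pd.Series({"volume": volume, "ADI": adi,
                      "segment": f"{fns} moving - {behavior}"})

print(sales.groupby("product").apply(segment))
```

With these toy numbers, Mint Candies come out as Fast moving - Consistent and Luxury Dark as Slow moving - Erratic, matching the classification above.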

Based on our previous example, the new solution would be:

[Graph 2]

The bar graph above represents inventory allocation as per demand based on the segmentation; fast-moving products get a good share, which will definitely add to efficient sales performance. The next graph represents the selling pattern or behavior of the products, where we can term Mint Candies as Fast moving - Consistent, Bar Chocolates as Normal moving - Consistent, and Luxury Dark Chocolates as Slow moving - Erratic.

Hence, segmentation and proper classification of products help with inventory optimization and demand planning, and in getting decent accuracy when forecasting product demand. With this successful implementation, we are confident that we can bring value to your business as well. If you are looking for a similar implementation, you can simply mail us at sales@bistasolutions.com.

RODE (Ramco OnDemand ERP) – A Boon, a Pocket-Friendly ERP Solution

Ramco OnDemand ERP, or ERP on the cloud, is the right solution for any organization's enterprise needs because it takes the full power of ERP and places it on the cloud. Most importantly, you do not need to invest in new hardware, spend time on training, or hire any additional IT staff, because the application (ERP) is delivered via the Internet. The automated maintenance and automated upgrade features in Ramco also free you from the worry of doing these things manually on a regular basis. Additionally, Ramco has a simple installation procedure and can be implemented in a short duration of time, which makes it cost-effective as well.

Ramco Systems is completely modular and offers a suite of products accessible over the Internet. It allows you to fetch any information from any part of the globe with one click in the browser, from any device: a laptop, a PDA, a mobile phone, or a tablet PC.

Ramco ERP solutions are made available to users on a subscription basis, as a result of which you can choose to scale up or scale down as required, which in turn helps cut costs. There is no need to pay any license fees or AMC (Annual Maintenance Contract); you will be charged according to your usage, nothing more. Ramco provides the full range of enterprise functions through a suite of ERP software products: Finance, Aviation, Manufacturing, Customer Relationship Management (CRM), Human Capital Management (HCM), Supply Chain Management (SCM), Enterprise Asset Management (EAM), Aviation M&E/MRO, Project Management, Process Control Analytics, Advanced Planning & Optimization, and Connectors. The Aviation MRO software can control your functions, plan ahead, manage smarter, and deliver the desired results on time. The Enterprise Mobility Solutions power pack provides functionality that helps you reach your potential, equips you with best practices in the industry, and ensures you achieve your tasks with maximum precision.

Ramco's enterprise software is developed on the VirtualWorks platform, which is based on SOA standards. This enterprise cloud solution provides a consistent, multi-layer architecture that plugs into all technologies and infrastructure platforms, consequently allowing you to integrate with other portals, devices, and applications. This gives you a phenomenal increase in your power to collaborate with all your business associates.