An Overview Of SAP HANA Hardware


SAP HANA is a combination of software and hardware built to process huge volumes of real-time data using in-memory computing. To get the full benefit of the platform, it is essential to be familiar with its hardware infrastructure. When it comes to SAP HANA SP12 training, learning about the SAP HANA hardware will help you absorb the course material more effectively.

SAP HANA Hardware

SAP HANA may be installed and configured only by certified partners. HP, Cisco, Hitachi, Fujitsu, Dell, and NEC are some of the leading current hardware partners. Their offerings include appliances that speed up implementation, entry-level systems, and storage solutions.

When it comes to the hardware, there are more than 450 certified appliances. For a detailed look, consult the Directory of Certified SAP HANA Hardware.

In addition to these appliances, it is possible to certify sensible custom configurations with enterprise storage such as EMC VMAX and Violin, and any Intel server in any configuration can be used for non-production use cases. These certifications give customers great flexibility: they can select the storage, vendor, and networking of their choice, and for some non-production situations they can build more cost-effective systems. A notable advantage is that customers can get their production hardware properly provisioned; incorrect hardware provisioning can be a costly mistake.


Ways To Scale SAP HANA

When it comes to scaling this platform into immense systems, there are two possible approaches: scale up and scale out.

A scale-up system builds a single unit with as many resources as possible, whereas a scale-out system connects a number of smaller units into a single clustered database. Because HANA is a shared architecture, shared storage is essential for data persistence.

SAP HANA requires a fixed CPU-to-RAM ratio for production units. For the SAP Business Suite it is 768GB per socket, and for analytic use cases it is 256GB per socket. Mainstream Intel systems come with 4 to 8 sockets, which means that with currently available hardware, analytics customers can have a maximum of 2TB and Business Suite customers a maximum of 6TB in a single system.
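As a quick sanity check, the per-socket ratios above can be turned into the system maximums with a few lines of Python (the socket count and ratios are the figures quoted in this section):

```python
# Per-socket RAM limits quoted above, in GB
GB_PER_SOCKET = {"Business Suite": 768, "Analytics": 256}
MAX_SOCKETS = 8  # mainstream Intel systems ship with 4 to 8 sockets

for use_case, gb in GB_PER_SOCKET.items():
    max_gb = gb * MAX_SOCKETS
    print(f"{use_case}: up to {max_gb} GB ({max_gb // 1024} TB) per system")
# Business Suite: up to 6144 GB (6 TB) per system
# Analytics: up to 2048 GB (2 TB) per system
```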

Technical Architecture Of SAP HANA

The technical architecture of this powerful framework is very simple. The following are some of the components of the architecture:


Central Processing Unit

It is based on Intel's Nehalem EX or Westmere EX platforms. A HANA appliance is an immense rack-mount unit that includes up to 8 CPUs and 80 cores. It is commodity server hardware that you can purchase online; for example, a Dell PowerEdge R910 with 1TB of RAM costs around $65k on Dell's website.

Random Access Memory

There are plenty of RAM options matched to the CPUs of a HANA system. For example, twenty cores permit 256GB of RAM. 1TB of RAM costs approximately $35k.

Fast Log Storage

A preferred choice is the Fusion-io ioDrive Duo, though it is very expensive. In some configurations, the data and log storage are shared. The latest log-storage option is the Fusion-io ioDrive2, which is cheaper and faster than the original ioDrive.


Data Storage

It is sized at 4x RAM. Compared with other certified single-node configurations, the cheapest option is SAS direct-attached storage. With this data storage it is possible to power the system down and perform tasks such as backups. A 1TB storage system costs around $15-20k. Multi-node configurations use some form of shared storage.

Note that in addition to the hardware budget, you need to consider the cost of installation services and the support contract for the pre-built system, which are offered by SAP HANA's certified partners.

Hardware Vendors Of HANA

Several different hardware vendors offer SAP HANA systems, and most of them provide good service. The main difference among partners shows up in the implementation quality delivered by their service professionals. Keep in mind that poorly designed or poorly maintained SAP HANA units will not perform well.

For ultra-high-end use cases, it is essential to consider specifics such as better networking and SSD storage. If you want HANA appliances to perform extremely well, it is worth building with Tailored Datacenter Integration, although this is unnecessary for standard use cases. These days SAP offers an open hardware platform, so customers can choose from multiple vendors based on their preferences and requirements. Fujitsu, IBM, HP, and Dell are recommended vendors for good performance.



Apache Spark with Scala / Python and Apache Storm Certification types


Apache Spark, a general-purpose framework, supports a wide range of programming languages, including Scala, Python, R, and Java. Hence it is common to come across the question of which language to choose for a Spark project. The question is tricky to answer, since it depends on the use case, the skill set, and the personal taste of the developers. Scala is the language many developers prefer. In this article, we will get an idea of how these languages fare for Spark.

Most developers eliminate Java, even if they have worked with it for long periods, because Java is less suitable for big data Spark projects than Scala and Python. It is very verbose: even to achieve a simple goal, developers need to write many lines of code. The introduction of lambda expressions in Java 8 reduced this problem, but Java is still not as flexible as Scala or Python. In addition, Java does not provide a Read-Evaluate-Print Loop (REPL) interactive shell, which is a deal breaker for many developers. With an interactive shell, data scientists and developers can explore a dataset and prototype their application effortlessly, without a complete development cycle. For a big data project, that is an essential tool.
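To illustrate the verbosity gap, here is the classic word count in a few lines of plain Python; a pre-Java-8 MapReduce version of the same job needs separate mapper and reducer classes and dozens of lines of boilerplate, while the PySpark equivalent is a similarly short chain of flatMap/map/reduceByKey calls:

```python
from collections import Counter

# Word count - the "hello world" of big data - in a handful of lines.
text = "to be or not to be that is the question"
counts = Counter(text.split())
print(counts.most_common(2))  # [('to', 2), ('be', 2)]
```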

Advantage Of Python

Python remains the preferred choice for many machine-learning workloads. MLlib includes only parallel ML algorithms, that is, those appropriate for distributed datasets. A developer with good proficiency in Python can easily build a machine-learning application.


Python vs. Scala

Next, let us compare Scala and Python. The two languages share several features:

  • Both are functional
  • Both are object-oriented
  • Both have passionate support communities

Scala offers some advantages over Python, listed below:

  • It is statically typed, yet it looks like a dynamically typed language because it uses a sophisticated type-inference mechanism. The compiler can therefore still catch issues at compile time.
  • Scala is, in general, faster than Python. If significant processing logic is written in your own code, Scala will offer better performance.
  • Since Spark is built on Scala, being proficient in Scala helps developers debug the source code when something does not behave as expected. For a rapidly evolving open-source project such as Spark, this matters a great deal.
  • Using Scala for a Spark project gives the user access to the latest and greatest features.
  • Most new features are added to the Scala API first and then ported to Python.
  • If developers use Python to drive Spark, whose core is written in Scala, translation happens between the two languages and environments. That translation layer can be a source of unwanted issues and extra bugs.
  • Scala's static typing helps find errors earlier, at compile time, whereas Python is dynamically typed.
  • With Scala, much of the unit-test code can be reused in the application.
  • Stream processing is where Python's support is weakest. Python's initial streaming API supported only elementary sources such as text files and text over a socket. Custom Python sources still do not support Kinesis and Flume, and the two streaming output operations saveAsHadoopFile() and saveAsObjectFile() are not available in today's Python API.
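The compile-time versus runtime distinction in the list above can be seen in a short, contrived Python sketch: the type mistake below only surfaces when the offending line actually runs, whereas Scala's compiler would reject the equivalent code before it ever executed.

```python
def add_totals(records):
    # Sums the "total" field of each record; nothing checks the
    # field's type until the addition is attempted at runtime.
    return sum(r["total"] for r in records)

good = [{"total": 10}, {"total": 32}]
print(add_totals(good))  # 42

bad = [{"total": 10}, {"total": "32"}]  # a string has slipped in
try:
    add_totals(bad)
except TypeError as exc:
    print("caught only at runtime:", exc)
```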


Introduction To Storm

Apache Storm is an open-source, distributed real-time computation system. With Storm, it is easy to do for real-time processing what Hadoop did for batch processing: reliably process unbounded streams of data. Storm is absolutely free and simple, can be used with any programming language, and is fault-tolerant and scalable, guaranteeing that the data will be processed. In addition, it is very simple to set up and operate.

Storm has several use cases, including the following:

  • Online machine learning
  • Real-time analytics
  • ETL
  • Distributed RPC
  • Continuous computation

About Apache Storm Certification

Most Apache Storm certification courses come with video-based training. The focus of the course is real-time processing of unbounded streams of data; since Storm is an open-source real-time computation system, it centers on real-time analytics.

The main aim of the course is to teach Big Data concepts, the types of analytics, batch analysis, and the advantages of Storm for real-time Big Data analytics. With Storm certification, candidates gain exposure to a wide range of real-world data-analytics projects.

Apache Spark With Scala / Python And Apache Storm Certification Types

The two kinds of certifications for Spark with Python / Scala and Storm are:

  • CCA500 – Cloudera Certified Administrator for Apache Hadoop
  • CCA175 – Cloudera CCA Spark & Hadoop Developer Exam

Training companies provide efficient programs to prepare for the Apache Spark with Scala / Python and Apache Hadoop certifications. With these certifications, candidates become proficient in essential skills such as machine-learning programming, Spark Streaming, shell scripting with Spark, GraphX programming, and Spark SQL.




What Is Apache Spark? Getting Started With Apache Spark


Undeniably, plenty of professionals are willing to learn Spark, since it is the technology driving the Big Data and analytics world. Are you willing to learn the fundamentals of Apache Spark? Then continue reading: this article provides useful information for those getting started with Apache Spark.

What is Apache Spark?

Apache Spark is a fast and flexible in-memory data-processing engine with expressive and elegant application programming interfaces in Java, Python, Scala, and R. It permits data workers to efficiently run machine-learning algorithms that require fast, iterative access to datasets. Apache Spark on Hadoop YARN allows deep integration with Hadoop and the many other Hadoop-enabled workloads in the organization. The following are some of the features that make users more productive:

  • It supports machine learning, streaming, real-time, and batch workloads within a single framework.
  • It performs in-memory processing when possible, resulting in faster execution for medium to large-scale data.
  • It provides a higher level of abstraction than the Java MapReduce API, with the developer's choice of language; at present, developers prefer Java, Python, or Scala.
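The payoff of in-memory processing in the second point can be sketched in plain Python (an invented stand-in for Spark's cluster-wide RDD caching): keeping a computed result in memory means repeated passes over the same data do not redo the expensive work.

```python
import time

def expensive(x):
    time.sleep(0.01)  # stand-in for a costly computation
    return x * x

cache = {}
def cached(x):
    # Compute once, then serve later requests from memory.
    if x not in cache:
        cache[x] = expensive(x)
    return cache[x]

data = [1, 2, 3] * 50  # the same records revisited many times

t0 = time.perf_counter()
uncached_results = [expensive(x) for x in data]
t_uncached = time.perf_counter() - t0

t0 = time.perf_counter()
cached_results = [cached(x) for x in data]
t_cached = time.perf_counter() - t0

print(cached_results == uncached_results)  # same answers
print(t_cached < t_uncached)               # far less recomputation
```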


Getting started with Spark

The landscape of big data analysis skills has transformed over the past couple of years. Hadoop has dominated a large number of tools. If you are new to the field of analysis, it is worth having a look at tools beyond Hadoop. Hadoop is 11 years old and was astonishing at the time of its introduction; however, against today's requirements, it has the following setbacks:

  • Security concerns: Hadoop's security model is disabled by default because of its complexity, which can put users' data at huge risk. It also lacks encryption at the storage and network levels, a major setback for government agencies and others who prefer to keep their data under wraps.
  • Hadoop is not a fit for small data. Because of its high-capacity design, the Hadoop Distributed File System (HDFS) cannot efficiently support random reading of small files.
  • It takes a large amount of code to create even the simplest tasks.
  • The amount of boilerplate is crazy.
  • Even a simple single-node installation requires too many configurations and processes.

A new Tool set

Fortunately, many new tools have been developed to solve these issues: Cloudera's Impala, Apache Drill, Spark, Shark, and proprietary tools such as Splunk. However, the Hadoop platform suffers from an excess of bolt-on components for performing particular tasks, so it is easy for groundbreaking tools to get lost in the shuffle.

Apache Drill and Cloudera’s Impala are both abundantly parallel query-processing tools, which is designed to execute the available Hadoop Data. They offer the user rapid SQL queries for HBase and HDFS with the theoretical option of merging with other input formats. Unfortunately, it is hard to find people who managed this effectively with Cassandra.

Several vendors today provide a wide variety of mostly cloud-based tools that require you to send them your data, after which you can run queries using their user interfaces and application programming interfaces.

Apache Spark, with its SQL query API, offers an in-memory distributed analysis platform. Users deploy the framework as a cluster and submit tasks to it, much as they would with Hadoop. Apache Shark is essentially Hive on Spark: it uses your existing Hive metastore and executes the queries on your Spark cluster. Several experts have announced that Spark SQL will replace Shark.

Hopefully the discussion above gives you an idea of which tool to choose. Most Hadoop vendors accept that Spark is an ideal replacement for Hadoop. Hadoop connectors represent substantial investments, called InputFormats, and all of them can be leveraged with Spark; for example, the Mongo-Hadoop connector can still be used in Spark even though it was designed for Hadoop. Spark supports both streaming and batch analysis, which means users can rely on one framework for batch processing and real-time use cases alike, something not possible with the Hadoop tool alone. The functional programming model introduced by Spark is also better matched to data analysis than Hadoop's Map/Reduce API.

The Spark ecosystem includes the Spark Core API and four libraries: Streaming, Spark SQL, graph computation (GraphX), and the Machine Learning Library (MLlib). These libraries run on top of the Spark Core API, and any Spark application requires Spark Core plus whichever of the four libraries the application needs. Spark can be a hundred times faster than Hadoop's MapReduce because of its in-memory caching between computations and reduced disk I/O.


Spark MLlib and Installation of R in Jupyter Notebook

Introduction to MLlib

Are you planning to learn Spark and searching for useful information about Spark MLlib, R, and Jupyter? This article was put together to give you exactly that. Let us begin with MLlib.


The Spark Machine Learning Library (MLlib) focuses mainly on learning algorithms and utilities such as clustering, classification, collaborative filtering, regression, and dimensionality reduction. It fits easily into Spark's APIs and interoperates with R libraries and with NumPy in Python. It can also use any Hadoop data source, such as HBase, HDFS, or local files, which makes it simple to plug into Hadoop workflows. In terms of performance, MLlib offers high-quality algorithms and can work a hundred times faster than MapReduce. Spark shines at iterative computation, which is what enables MLlib to run fast: its high-quality algorithms exploit iteration and deliver better results than the one-pass approximations sometimes used on Hadoop MapReduce.
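To see why iterative computation matters, here is a deliberately tiny one-dimensional k-means clustering sketch in plain Python (not MLlib's actual implementation; the data points and starting centroids are invented). Each pass refines the centroids, which is why an engine that keeps the dataset in memory between iterations pays off:

```python
def kmeans_1d(points, centroids, iters=10):
    """Tiny 1-D k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its cluster, and repeat."""
    for _ in range(iters):
        clusters = {c: [] for c in centroids}
        for p in points:
            nearest = min(centroids, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        centroids = [sum(ps) / len(ps) if ps else c
                     for c, ps in clusters.items()]
    return sorted(centroids)

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
print(kmeans_1d(points, centroids=[0.0, 10.0]))  # roughly [1.0, 9.0]
```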


Why use MLlib?

MLlib is built on Spark, a fast general-purpose engine designed for large-scale processing. It lets you write application code in various languages such as Scala, Java, and Python.

MLlib Installation

When it comes to installing MLlib, the only thing you need to do is install Spark, since MLlib is already included in Spark.

Let us look at how to install Spark 1.1.0. First, download Apache Spark from the download link on the official website.

The download page generally includes Apache Spark packages for several popular HDFS versions. If you want to build Apache Spark from scratch, it is suggested you follow the guide to building Apache Spark with Maven. On the download page, simply choose the Spark release, package type, and download type.

Apache Spark runs on both Windows and Unix-based systems such as Mac OS and Linux, and it is effortless to run Spark locally on one machine. All you need on your system is Java on the system PATH, or the JAVA_HOME environment variable pointing to a Java installation. Apache Spark needs Java 6+ and, for Python users, Python 2.6+. Spark 1.1.0 uses Scala 2.10 for the Scala application programming interface.

A situation may arise when building a machine-learning model in which the input dataset does not fit in the computer's memory. Developers generally use distributed computing tools such as Apache Spark and Hadoop to run the computation on a cluster of several machines. On the other hand, Spark can process the input data locally on one machine in standalone mode, and it can even build models when the dataset exceeds the machine's memory capacity.

Introduction to Jupyter Notebook

Jupyter Notebook is a web application that lets users create and share documents containing equations, live code, explanatory text, and visualizations. Its uses include machine learning, statistical modeling, numerical simulation, data cleaning and transformation, and much more. When working on a data-science problem, users might need an interactive platform on which to create and share code with others, and a notebook resolves this easily. A notebook supports reproducible, transparent reporting, and notebooks are ideal when the user needs to integrate plain text with rich elements such as calculations and graphics.

R Notebook

Nowadays, Jupyter has emerged as the standard choice for R users. It offers a better solution than other notebooks such as Beaker and Apache Zeppelin, although alternatives such as R Markdown, Sweave, and knitr have been more popular within the R community.

Installation of R in Jupyter Notebook with the R Kernel

  • One of the best ways to run R in a Jupyter notebook is with the R kernel. To run R, load IRkernel (the kernel for R, available on GitHub) into the notebook platform and activate it to start working with R.
  • At the beginning, it is essential to install certain packages. Ensure that you do this in a regular R terminal; if you do it in the RStudio console instead, you will get an error.
  • Next, enter a number at the command prompt to choose a CRAN mirror so that the essential packages can be installed; the installation then proceeds.
  • Then make the kernel visible to Jupyter.
  • Finally, open the application with Jupyter Notebook. You will see R in the kernel list whenever you create a new notebook.

Advantages of using Jupyter

The main focus is to facilitate sharing notebooks with other users. You can write some code, mix that code with some text, and publish the combination as a notebook. The idea is to let readers view both the code and the result of executing it.

Using Jupyter is an ideal way to share small experimental snippets and to publish detailed reports with a complete code set and explanations. The main advantage that sets Jupyter apart from other services is that it captures the code's output in addition to allowing code snippets to be posted.


Introduction to Data science with Apache Spark


In general, companies use their data to make decisions and to produce data-intensive services and products, including prediction, recommendation, and diagnostic systems. Doing this requires a set of skills collectively referred to as data science. If you want to take your skills to the next level with Data Science with Apache Spark training and certification, you have reached the right place. This article presents some useful information about data science and Apache Spark.

Introduction to Data Science

Data science is an emerging field concerned with the collection, preparation, analysis, management, preservation, and visualization of abundant collections of information. The term implies that the field is strongly connected to computer science and databases; however, working effectively in data science also requires several other important skills, such as non-mathematical skills, communication skills, ethical reasoning, and data-analysis skills. Data scientists play an active role in the design and implementation of related areas such as data acquisition, data architecture, data archiving, and data analysis. The influence of data science on businesses is something more than data analysis alone.

With the development of several new technologies, the sources of data have increased enormously. Machine log files, web-server logs, users' presence on social media, records of users' visits to websites, and several other sources have driven exponential growth in data. Individually, the content might not appear massive, but when generated by large numbers of users it amounts to terabytes or petabytes. Such data does not always arrive in structured form; it comes in semi-structured and unstructured formats too. This umbrella is what is considered Big Data.

The main reason big data matters so much today is forecasting and nowcasting: forming models to foretell the future. Although an incredible amount of data is gathered, only a small amount is analyzed. The process of deriving information from big data intelligently and efficiently is referred to as data science. The following are some of the common tasks in data science:

  • Define a model
  • Prepare and clean the data
  • Dig through the data to identify what is useful for analysis
  • Evaluate the model
  • Use the model for large-scale data processing
  • Repeat the process until the result is statistically satisfactory
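The define/evaluate/repeat loop above can be sketched in a few lines of plain Python (the data points, candidate models, and error measure here are invented purely for illustration):

```python
# Toy data, roughly following y = 2x
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]

def mean_squared_error(slope, points):
    # Evaluate a candidate model y = slope * x against the data.
    return sum((y - slope * x) ** 2 for x, y in points) / len(points)

best_slope, best_err = None, float("inf")
for candidate in [0.5, 1.0, 1.5, 2.0, 2.5]:   # define a model...
    err = mean_squared_error(candidate, data)  # ...evaluate it...
    if err < best_err:                         # ...and keep the best so far
        best_slope, best_err = candidate, err

print(best_slope)  # 2.0 - the slope that fits this toy data best
```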


An introduction to Apache Spark 

For big data development, Apache Spark is considered the most exciting technology. Let us discuss why Apache Spark is preferred over its predecessors.

Apache Spark is a cluster-computing platform designed to be general-purpose and fast. In terms of speed, Spark extends the famous MapReduce model to efficiently support several kinds of computation, including stream processing and interactive queries; there is no doubt that speed is essential when processing large datasets. The main features of Apache Spark are its speed and its capability to execute computations in memory, and the system is also more efficient than MapReduce for complex applications running on disk.

Purpose of using Spark

This general-purpose framework is widely used for a broad range of applications. Spark's use cases fall into two categories, data applications and data science, though in practice the usage patterns and disciplines blur, and many professionals apply both skill sets. Spark supports various data-science tasks with several components. It facilitates interactive data analysis using Scala or Python; Spark SQL includes a separate SQL shell that can be used for data exploration with SQL; machine learning and data analysis are supported through the MLlib library; and it is possible to call out to external programs written in R or Matlab. Spark enables data scientists to handle problems with larger data sizes than they could with tools such as Pandas or R.

Next to data scientists, the other large category of Spark users is software developers. Developers use Spark to build data-processing applications, applying software-engineering principles such as interface design, encapsulation, and object-oriented programming. They use this knowledge to design and develop software systems that serve business use cases.

Spark offers an easy way to parallelize applications across clusters, and it hides the complexity of network communication, distributed-systems programming, and fault tolerance. It gives developers sufficient control to supervise, monitor, and tune applications while letting them implement tasks quickly. Users favor Spark's data-processing applications for their benefits: they are simple to learn and offer a wide range of functionality, reliability, and maturity.


Things you need to know about Cloudera CCA Spark and Hadoop Developer Exam (CCA175)


Most professionals are willing to build their careers in the field of Big Data, and they are well aware that businesses prefer to hire proficient CCA Spark and Hadoop Developers. This is the main reason candidates are showing interest in the Cloudera CCA Spark and Hadoop Developer certification program. Cloudera lets professionals establish careers in Big Data by certifying their developer proficiency. Based on current trends, experts state that demand for Big Data professionals keeps growing, and there are plenty of opportunities to fill the shortage of experts in this field. To address the talent gap, professionals are advised to get CCA175 certified.

CCA Spark and Hadoop Developers are required to demonstrate their full developer knowledge to design and handle Spark and Hadoop projects. If you want to be a certified CCA Spark and Hadoop professional, you must pass the CCA175 certification exam. A professional can take this remotely proctored exam at any time on his or her own system. The exam involves writing code in Python and Scala and executing it on a cluster.

Once a candidate clears the certification exam, he or she is provided with a logo that can be used on résumés, online profiles, and business cards. This logo offers a branded mark stating that the candidate possesses excellent Spark and Hadoop skills. The certified professional also receives a license that authenticates his or her CCA status and serves as evidence of excellent command of the tools and techniques of Spark and Hadoop.


The necessity of Certification

Rapidly evolving technologies bring constant change to the world of Big Data, which can create a big gap between professionals and their tools. To keep up with the ever-changing developments and trends in Big Data, and with the tools and techniques of Hadoop and Apache Spark, candidates are advised to update themselves continuously. Spark and Hadoop training helps them stay current with open-source environments and technologies. In addition, compulsory re-testing every two years keeps certificate holders up to date with the state of the technologies, tools, and challenges of leveraging Big Data. The CCA175 exam is performance-based, so being certified is an efficient way to demonstrate one's achievements.

CCA175 Exam Details

If a candidate is planning to take the CCA Spark and Hadoop Developer exam, it is essential to know the following details:

  • The test includes 10 to 12 performance-based tasks on a CDH5 cluster. Candidates must complete the test within 2 hours and score at least 70 percent to pass.
  • Candidates should note the question format used in the exam. Each question requires the candidate to solve a particular scenario; some require the Hive or Impala tools, and some require writing code. Sometimes a template is provided to speed up coding time: it contains a solution skeleton, and the candidate enters the functional code on the missing lines. The template comes in either Python or Scala.
  • There is no compulsion to use the template; candidates may solve the questions in whichever language they prefer. However, keep in mind that solving each question from scratch may consume more time than is allocated.

Once the candidate submits the exam, or the allocated time runs out, the evaluation process starts immediately, and the score is emailed to the candidate as soon as possible. The performance report lists the question numbers of the attempted questions and the grade for each, and it also shows the reason for any wrong answer. On clearing the exam, the candidate receives a second email after a few days containing a digital certificate in PDF format, a license number, an update for the LinkedIn profile, and a link to download the logo, which the candidate can use on professional collateral and social-media profiles.

The certification exam is simple, but it takes plenty of practice to finish all the questions on time with correct solutions.


10 compelling features in Tableau you can’t miss!




It is finally here! The Tableau 10 beta has just launched, and there are plenty of reasons to give it a try. From cross-database joins to Tableau Mobile for Android and one-click revision history, the updated features of this intelligent software are too compelling to overlook. Before you pursue a course in Tableau, here is a brief recap of 10 great features of the Tableau software that will make you fall in love with it.

Tableau Features

#1 – Cross Database Joins

Whether your data resides in Excel or Oracle, you can comfortably join the sources in Tableau software and generate an integrated data source. In addition, you can create a Tableau extract of this data source for publishing.

#2 – Tableau for Android

Tableau for Android is finally here. Now you can access all the features of your favourite software tool with a simple tap of your finger. It comes loaded with everything you need to handle your data on the go. Get set for a revolutionary experience, as everything you require is going to be at your fingertips, quite literally!

#3 – Device Designers

The Device Designer is one of the most promising features of Tableau. Tableau has made building and sharing dashboards pretty easy. From now on you can design, customize and publish dashboards that are already optimized for access on phones as well as tablets. As per reports, Tableau 10 is also going to include one-click adjustment of fonts on the screen.

#4 – Revision Histories

Want to switch back to an earlier version of a workbook? Well, you can do it now with considerable ease, as this takes a single click in Tableau 10. Older versions can be restored by simply downloading and republishing them. You are also allowed to set the number of revisions that are kept.

#5 – Licensing Views

You can get a peek into the licensing and desktop usage of Tableau software, thanks to the newly designed administrative views. Once the necessary configuration has been performed, the software reports usage information even when you are not logged in.

#6 – Subscribe Others

Sharing becomes incredibly easy with Tableau 10. You can subscribe other users to a particular dashboard pretty easily, and a subscribed user receives an update via e-mail once you do.

#7 – Mobile Device Management

There’s another exciting feature added to Tableau that is worth mentioning – Mobile Device Management. Support for MobileIron and VMWare Airwatch has been added to the tool, which makes deployment across the organization very easy.

#8 – Web Authoring

Considerable improvements have been made to Tableau software, including Web Authoring. This feature makes it possible to do a lot on the web, such as publishing data sources and authoring dashboards right in the browser of your choice. In this way, your productivity gets enhanced in a big way!

#9 – Document API

Here is another spectacular feature that has just been added to Tableau software. With the Document API you can work with Tableau files such as the .twb and .tds formats with considerable ease. You can build one template workbook in Tableau and deploy the same across multiple databases and/or servers.
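A .twb workbook is XML under the hood, which is what makes this templating workflow possible. As a rough illustration of the idea (this is not the official Document API, and the element and attribute names below are simplified stand-ins for a real .twb file), a template's connection can be repointed at different databases using only Python's standard library:

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for a .twb template: one datasource with one connection.
# (Real .twb files are far richer; this structure is illustrative only.)
TEMPLATE = """<workbook>
  <datasource name='Sales'>
    <connection server='dev-host' dbname='sales_dev'/>
  </datasource>
</workbook>"""

def retarget(template_xml, server, dbname):
    """Return a copy of the workbook XML pointed at a different database."""
    root = ET.fromstring(template_xml)
    for conn in root.iter("connection"):
        conn.set("server", server)
        conn.set("dbname", dbname)
    return ET.tostring(root, encoding="unicode")

# One template workbook, deployed against several environments.
for server, db in [("prod-host", "sales_prod"), ("qa-host", "sales_qa")]:
    print(retarget(TEMPLATE, server, db))
```

The actual Document API wraps this kind of manipulation in a supported interface, so you never have to touch the raw XML yourself.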


#10 – ETL Refresh

Want to work with Lavastorm and Alteryx and leverage web data connectors? You can do that as well with the help of the ETL refresh feature, which lets you change data-source parameters from Tableau Desktop.

Getting trained in Tableau is a great idea and updating yourself regularly about the releases becomes equally significant.


A Deeper Analysis of SAP S/4 HANA Simple Finance

The connection between a company's financial leadership and its IT capabilities is growing increasingly tight. A failure in finance can affect IT, and a failure in IT can affect finance. Organizations cannot make board-level and corporate decisions without understanding that relationship. SAP S/4 HANA is a product built on SAP HANA, the latest in-memory platform. SAP HANA is a technology platform well regarded for its speed, and HANA's abilities to reinforce stability, simplify enterprise architecture, generate big savings and scale almost limitlessly are equally impressive. Let us have a deeper look at this platform.


SAP HANA is a column-oriented, in-memory, relational database management system, developed and marketed by the multinational corporation SAP SE. SAP HANA was formerly known as the SAP high-performance analytic appliance. This platform empowers businesses to function in real time and to crunch Big Data, including operational, transactional, structured and unstructured workloads. With features like advanced in-memory data compression, unmatched processing power and columnar storage, SAP on HANA lets businesses analyze, transact and predict immediately on a single platform.

SAP S/4 HANA Simple Finance is popularly known as the next-generation business suite, specially designed to make operations simple and hassle-free in today's digital economy. SAP HANA offers a personalized user experience and is easy to access.

This comprehensive finance solution can be deployed on-premise or in the cloud. It provides immediate insight for finance professionals. It enables non-disruptive migration, improves the portfolio of the current finance solution and preserves its functional assets. Treasury management, financial risk management, financial close and collaborative finance operations are some of the financial areas covered by this new business suite.

Contribution of SAP Simple Finance

  • It enables high-speed, real-time analytics across all dimensions of finance without restrictions.
  • It includes built-in abilities for analysis, simulation and prediction to validate the financial consequences of planned business options.
  • It offers optimized, event-driven business processes.
  • It provides a non-disruptive migration path.


Benefits of SAP S/4 HANA Simple Finance for Organizations

The following are some of the notable advantages for organizations that use SAP S/4 HANA as the ERP foundation for their business functions:

  • Simplified master data
  • Eliminated redundant data
  • Eliminated reconciliation effort
  • Appealing user experience
  • Support for processing transactions and analytics together
  • Accelerated financial close
  • Improved system performance
  • Real-time analytics
  • Better business insight through more relevant and timely information
  • Lower IT complexity
  • Effective financial risk management
  • Effective working capital management
  • Reduced cost of manual report generation
  • Reduced operational risk from fraud and other non-compliance activities


Source: SAP Simple Finance Architecture from Global Online Training

The figure shows that Simple Finance allows one common real-time universal journal entry for all subsidiaries. This feature guarantees enterprise-wide consistency and reduces reconciliation errors and time. With SAP on HANA Simple Finance, the CO, FI and CO-PA line entries are stored at the same level of granularity and are associated in a 1:1 ratio. This ensures that reporting is not restricted by application boundaries.

What users think about SAP S/4 HANA Simple Finance on-cloud?

Source: UK and Ireland SAP user Group

Applications of SAP Simple Finance deployed on the in-memory cloud platform are receiving a mixed response from users. Over four in ten user organizations are looking to use this technology. However, organizations with no adoption plans remain hesitant, weighing the platform's cost against its benefits.

Deployment Choices of SAP

SAP S/4 Simple Finance can be deployed in three different consumption models. They are on-premise, managed cloud and hybrid.

On Premise

Some users like to run the SAP implementation on-premise. In this method, the software is deployed in the customer's own data center. This kind of implementation gives the customer full control over the system, including functional configuration and the power to alter the system. The company's IT team manages its security and governance.


Managed Cloud

Companies interested in deploying Simple Finance in the cloud can make use of the SAP S/4 HANA enterprise cloud. In this managed service, SAP maintains the organization's software and servers, installing the software on a server appropriate for the particular customer. Customers can configure their unique business processes; however, they are not permitted to make alterations. SAP is responsible for performing upgrades, carrying out technical maintenance and handling hot fixes.


Hybrid

The hybrid model is a combination of on-premise and cloud implementations. Customers who have not moved all of their ERP modules to the cloud may find it difficult to run in this environment.

SAP Training

Enterprises that wish to migrate to the SAP S/4 Simple Finance platform prefer to recruit SAP-certified professionals, and there are countless SAP training companies across the world. These professionals should have full-fledged knowledge of SAP Simple Finance, including the administration, development and application design of real-time projects.






An Introduction to SAP S/4 HANA – Simple Finance

There is no doubt that SAP HANA is becoming the hottest technology platform in the IT market. More than 1200 companies from 58 countries are developing applications on this platform. If you have ever wondered how SAP S/4 HANA works and how it helps clients enhance their business, then continue reading this article. Before getting deep into SAP HANA, let us look at SAP and the importance of SAP training.


About SAP

SAP is one of the world's leading business software providers; today, more than 500 companies are using SAP. SAP stands for Systems, Applications and Products in Data Processing. SAP includes several modules, and each module represents a business process. Organizations show great interest in using this software to meet their business requirements. With SAP features like a high level of accuracy and efficiency, people find it effortless to carry out their business processes.

The applications of SAP are integrated in such a manner that almost all the departments in a company carry out their activities in a unified way. Because of this effective functionality, major software organizations use SAP products to run their business activities.


SAP Training

Several companies provide SAP courses. If you want a complete understanding of the extensive functionality of SAP, it is essential to undertake training. Software professionals take SAP training to deal effectively with SAP applications in their environment. Organizations are showing more interest in employing SAP-certified professionals, so if you want to shine in the ERP sector, it is worthwhile to obtain certified training in one of the SAP business suites.

SAP S/4 HANA Simple Finance – The Beginning

S/4 HANA, the new business suite of SAP, is designed especially for the digital economy. SAP introduced SAP HANA in 2010. This fast database made a significant change and revolutionized the way database records are stored. The platform supports row-based as well as column-based storage, which is the main strength of SAP HANA. The main difference between the two storage methods is that the row-based method stores table records as a sequence of rows, while the column-based method stores the data in columns. The benefits of this feature are better compression, faster data access and enhanced parallel processing, which mean fast ad-hoc reporting and on-the-fly aggregations.
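The row-versus-column distinction can be made concrete with a small sketch. This is an illustration of the general idea, not of HANA's internals; the table and its values are invented for the example:

```python
# The same three-record table in two layouts.
rows = [  # row store: one tuple per record
    ("2015-01", "EMEA", 120.0),
    ("2015-01", "APAC",  80.0),
    ("2015-02", "EMEA", 150.0),
]

columns = {  # column store: one contiguous array per field
    "period": ["2015-01", "2015-01", "2015-02"],
    "region": ["EMEA", "APAC", "EMEA"],
    "amount": [120.0, 80.0, 150.0],
}

# Row store: an aggregation must walk every full record to reach one field.
total_rows = sum(record[2] for record in rows)

# Column store: the same aggregation scans only the single column it needs,
# which is why columnar layouts suit ad-hoc reporting and on-the-fly totals.
total_cols = sum(columns["amount"])

print(total_rows, total_cols)  # 350.0 350.0
```

The column arrays also hold many repeated values of one type side by side, which is what makes the better compression mentioned above possible.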

Haroon Arshad, a solution lead for finance on an S/4 HANA Finance implementation, states that it was time to adopt this business suite because of its ability to retrieve data and information faster. With code optimization and the shifting of processing logic, time is saved because all calculations are made at the database level itself. This speeds up large calculations such as variance calculations, month-end settlements and interest calculations.

S/4 HANA Simple Finance

SAP S/4 HANA Simple Finance, also known as Simple Finance or sFIN, is the first module of S/4 HANA. It includes redesigned data structures as well as a redesigned application layer. This module introduces some new concepts, as follows:

  • Central Finance
  • Universal Finance
  • Cash Management
  • COPA
  • Integrated Business Planning (IBP)
  • New Asset Accounting

As a result of these new concepts, all financial details can be obtained from a single table, called "ACDOCA". According to SAP S/4 HANA Finance expert Prashant Pimpalekar, this SAP module brings data from Asset Accounting, the General Ledger, the Material Ledger, the controlling coding block and CO-PA into one journal. With this main advantage, SAP has eliminated the need for reconciliation between CO and FI, removed the requirement to settle all cost elements, and done away with aggregate tables and index tables.
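The single-table idea can be sketched in a few lines. This is a deliberately simplified illustration, not the real ACDOCA schema; the field names and amounts are invented for the example. Because the FI and CO views are derived from the very same line items, they can never disagree, which is why the reconciliation effort disappears:

```python
# One journal table serving both FI and CO views.
acdoca = [
    {"ledger": "0L", "account": "400000", "cost_center": "CC10", "amount": 500.0},
    {"ledger": "0L", "account": "400000", "cost_center": "CC20", "amount": 300.0},
    {"ledger": "0L", "account": "500000", "cost_center": "CC10", "amount": 200.0},
]

def fi_view(entries):
    """G/L balance per account, derived from the single journal."""
    totals = {}
    for e in entries:
        totals[e["account"]] = totals.get(e["account"], 0.0) + e["amount"]
    return totals

def co_view(entries):
    """Cost per cost center, derived from the same lines at the same granularity."""
    totals = {}
    for e in entries:
        totals[e["cost_center"]] = totals.get(e["cost_center"], 0.0) + e["amount"]
    return totals

print(fi_view(acdoca))
print(co_view(acdoca))
```

In a system with separate FI and CO tables, the two views above would be maintained independently and periodically reconciled; with one journal, both are just different aggregations of the same rows.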

This extensive feature has a direct influence on data presentation, since it makes it possible to use Fiori, BI and SAP HANA content to a greater extent in S/4 HANA Simple Finance than in the previous ECC version.

Myths about SAP HANA

As Haroon notes, there are several myths about SAP HANA. The two most common are:

Myth 1: It is just an SAP appliance or database. This is wrong; SAP HANA is a complete ecosystem.

Myth 2: It is very expensive. This seems true if you consider only the upfront costs; however, if you take into account the long-term return on this software, it far exceeds what you invested.

Things Need to know on Implementation

When preparing for implementation, people need to be aware of some essential facts. The implementation of this software is not going to be a traditional finance implementation, so the organization and its professionals need a clear understanding of the technical architecture. If you want better performance, it is essential to examine each bit of data sent to the system.

In S/4 HANA Simple Finance, it is essential to evaluate the detailed requirements and then support the processes. Since there will be significant adaptations for users, change management is essential.


5 helpful tips for Strategy Mapping in Business Analysis

Company shareholders love soaring profits that hint at unbeatable growth. Markets love it when your company operates at its maximum capability. You may come up with an excellent strategy, but even that is of no use if you do not put sufficient effort into implementing it. You may promise the moon, but how are you going to deliver it? Here are some strategy mapping tips for devising a sound business strategy.


Before plunging into a Business Analysis training program, be prepared with a high-level understanding of the following:

  • Provide a well-defined overriding objective
  • Decide what the dominant value proposition is going to be
  • Identify the key financial strategies
  • Identify your primary customer-related strategies
  • Make a suitable plan for your learning and growth strategies




Tip #1 – Provide a Well-defined Overriding Objective

What does it take for an organization to be successful? There should be sufficient clarity on what the achievable overriding objective is and what strategies are in place to achieve it. This differs from ultimate objectives such as having satisfied customers, being the leading service provider or offering low-cost services. For instance, some overriding objectives could be:

  • Increase in profit margin from x% to y% within z number of years
  • Increase company share price by n% by a specified date
  • Increase in total shareholder return value


Tip #2 – What is going to be dominant value proposition?

Here the company chooses the value proposition that will help it scale the markets. The three value propositions, as defined in the book “The Discipline of Market Leaders: Choose Your Customers, Narrow Your Focus, and Dominate Your Market”, are as follows:

  • Customer Intimacy
  • Operational Excellence
  • Product Leadership

Tip #3 – What are going to be the key financial strategies?

Here a company defines three key financial strategies, namely:

  • Revenue growth
  • Productivity
  • Asset utilization

Every organization needs to dedicate sufficient time and effort into devising strong financial strategies for moving ahead.

Tip #4 – What are your primary customer-related strategies?

‘The customer is king’, and without any doubt the end customer is the primary focus of every business. Your organization should strive hard to:

  • Add new customers and retain the existing ones
  • Increase revenue earned per customer
  • Reduce cost incurred per customer

Each one of these strategies demands sufficient attention from your company. However, your choice of value proposition will define how much focus you decide to give to each one of them.

Tip #5 – Make a suitable plan for your learning and growth strategies

This step is all about identifying skill and learning gaps and making every possible effort to bridge them. There are three primary learning and growth areas:

  • Information capital – How well the company makes use of various databases, files, networks and other information systems to gain an edge over competitors.
  • Human capital – The economic value derived from investment in increasing the knowledge and skills of the employees of the organization.
  • Organization capital – It refers to the capability of an organization to connect the corporate goals with individual employee goals.

Successful strategy mapping takes considerable effort, but it yields reliable insights for business analysis. Organizations aiming for new heights ought to use it to outperform their competitors.

A career in Business Analysis can be lucrative, and all you need to do is take a professional Business Analysis course.


