
How Tech is Going to Change Our Lives Over the Next Ten Years



Being only twenty-one years old, I’ve pretty much grown up with technology. While I do remember the brutal days of dial-up, when texting didn’t exist and I had to actually remember a phone number, I have been pretty thoroughly connected with technology since I was young. I received my first cell phone for my eleventh birthday a little over ten years ago—a Nokia with a tiny, blue screen, whose most advanced features were the games ‘Snake’ and ‘Doodle’. I remember the popularity of Napster and Limewire, both controversial yet game-changing methods of exchanging content and information. I remember using AOL Instant Messenger as a way to stay connected with my friends. I remember playing PlayStation with my dad and thinking about how cool it would be to somehow play against my friends even though they were not with me.

This past October, for my twenty-first birthday, I received what is probably my tenth cell phone—an iPhone 6 with a touchscreen and a multi-megapixel digital camera that keeps me constantly connected with the rest of the world and allows me to access any information I want in just a few seconds. I am now using iTunes to buy the latest music, videos, books, games, and applications, as well as using Spotify to stream unlimited music directly to my phone and MacBook Pro. I have a social presence on LinkedIn, Twitter, Facebook, Instagram, Snapchat, and Pinterest, where I can connect with just about anyone, anywhere, anytime. I can now watch my friends play people from the other side of the world in real-time, online video games while I’m video chatting with my best friend, who is stationed outside of the United States. In ten short years of my life, this is how much technology has evolved in front of my own eyes. Plus, this does not even take into account the emergence and growth of other technologies and sources of information that have changed our lives. In just ten years, the world has seen an exponential increase in the complexity, sophistication, integration, and importance of, as well as the reliance on, technology. This leads me to ask the question: what will technology be like in ten years, and how is it going to change our lives?

Time Magazine recently published an article called “This Is How Tech Will Totally Change Our Lives by 2025”, based on a report released by the Institute for the Future that lists five predictions for the ways tech is going to change our lives in the next ten years. The driving force behind how technology is going to change our lives is the “ever-increasing hunger for data” which will “fundamentally change the way we live our lives over the next decade.” According to the Time article, “in the future, people might be able to personally sell info about their shopping habits, or health activities to retailers or pharmaceutical companies,” which means we might see an economic shift in which personal data can be shared, bought, or sold with more benefit to the consumer. With the introduction of the first directional shift, the information economy, individuals will be able to choose what they do with their information, possibly leading to more opportunities for individual and widespread financial or social gain.

Stemming from the increased dependence on data and the shift to an information economy, we will definitely see networked ecosystems, such as the Internet of Things, continue to expand. The Internet of Things is described as a network of physical objects, from basic household items to cars, which are embedded with technologies that enable an exchange of data via the internet. Could you imagine waking up one morning to your alarm clock, which is connected to your smartphone, which then sends a message to your shower to turn on? Then after you turn off the shower, a message is sent to your coffee pot to start brewing a cup, which then signals to turn on the kitchen lights and flip the television to the morning news? Or how about having your refrigerator identify when you are low on milk so that it can send you a reminder on your way home from work to pick some up after you pick the kids up from school? All of these instances seem a bit foreign right now; however, we’re already seeing and experiencing the beginning of the Internet of Things. The cross compatibility of all of these inanimate “things” promises to make our lives easier, more efficient, and possibly even safer. We’ve already begun to see self-driving cars become a reality, and these will likely revolutionize not only the automobile industry, but how we get from place to place, the number of accidents on the roadways, insurance costs, and many other facets of our lives.

While the Internet of Things may gather information about our daily decisions, the third shift caused by the information generation is the continued creation of increasingly sophisticated algorithms which may end up actually aiding our daily decisions at work. In the article “2025 Tech Predictions Both Thrilling and Scary”, it is stated that “tech leaders increasingly are saying that we’re moving to a world where employees will have smart decision-support systems.” These smart decision-support systems operate under what is known as augmented decision-making, which Daniel J. Power describes as “when a computer serves as a companion, advisor and more on an ongoing, context aware, networked basis.” So while data analytics has given us information on which to base future decisions, what augmented decision-making systems do is offer real-time information, essentially in the form of suggestions, to help individuals make important decisions. Having these support systems will be like having a second opinion for everything; however, the algorithm-produced information will likely be more accurate than another human being’s opinion. This, however, raises one important question: how long can these systems assist in decision making before they eventually replace the workers outright? While I think there are certainly fields that can benefit from this, such as the medical field, where a doctor could use these tools to determine a more accurate prognosis, there are some jobs that could be completely replaced by these systems, and this could have a profound effect not only on people’s lives, but also on our economy.

The fourth predicted shift is known as “multi-sensory communication.” In this shift, information will be communicated and received through multiple human senses. We can see an example of this in the recently released Apple Watch. Instead of ringing or vibrating, the watch will actually “tap” an individual on the wrist to let them know when they have a text or a notification. The Institute for the Future’s report expands on multi-sensory communication by stating, “in a world saturated with competing notifications, multi-sensory communication of information will cut through the noise to subtly and intuitively communicate in novel ways that stimulate our senses.” The addition of senses such as touch, smell and taste, as well as new experiences with sight and sound, will not only change the way we communicate socially, but also how developers, retailers, and marketers create products and appeal to customers. What this change will ultimately mean is that we will begin to stray away from the screens we hold and physically interact with, and transition to what the Institute for the Future describes as “screenless communication tools that allow people to blend the digital and physical in more fluid, intuitive ways.” The idea of blending the digital and physical can be seen in the immersive marketing tactic that Marriott Hotels launched in late 2014 called ‘The Teleporter’, a virtual reality experience programmed to let users “experience” and virtually transport to Hawaii or London. The technology featured a virtual reality headset and wireless headphones, along with several sensory-triggering features, such as producing heat, wind and mist, all working in sync to give the user a realistic experience of what it is like to be on the beaches of Hawaii or strolling through London. While this marketing strategy seems a bit over the top right now, we will definitely see similar tactics used in the next ten years, both inside and outside of our homes.

After all of this talk about sharing data and personal information and how it is going to transform life as we know it, I’ll introduce the fifth, and in my opinion most important, shift: privacy-enhancing technology. With the vast amount of data that will be produced and received, there will be a demand for better tools for privacy and security. A common ground must be found between the individual users who are concerned about their privacy and the companies leveraging their data, so that people’s information is protected and innovation does not cease because of personal exposure. There are predictions of “cryptographic breakthroughs” which will hopefully help minimize the number of hacks and breaches against companies and individuals, as well as continuous changes and updates to policy that will ensure protection for both producers and consumers. Privacy protection must keep advancing as the amount of data being shared increases; if it does not, our lives will essentially become openly available to anyone, and that is terrifying.

The information generation is upon us. We have already begun aggregating, analyzing and using data and information to enhance our lives. With the digital and physical worlds beginning to come together in places beyond computers and cellphones, we are in for an innovative, exciting, and possibly even scary next ten years where the only things that are certain are uncertainty and data. I’m curious to see where I am in 2025. I’m interested in what my cellphone, if cellphones even exist anymore, will look like and what it can do. I’m a bit nervous about the fact that policy tends not to keep up with the speed of technological innovation. I am excited to see how companies will incorporate all of our senses into technology and how this might enhance and change our lives. How am I going to be using technology at home, at work, in social settings? It is going to be unlike anything any of us have experienced before, but I am very interested in seeing how much technology changes in front of my eyes over the next ten years.


Installing Infragistics Ultimate Without an Internet Connection


The Infragistics Platform Installer was created to make it easier for users to install our various products. It achieves this by downloading the installers and updates the user selects through the UI. In some instances, the Platform Installer may have difficulty connecting to our servers, caused by very strict firewall rules on the network used to run the installer. These instances may result in the following errors:

  • System.Net.WebException: The operation has timed out
           at System.Net.WebClient.DownloadFile(Uri address, String fileName)
           at System.Net.WebClient.DownloadFile(String address, String fileName)
           at Infragistics.Wrapper.Interface.Utilities.AutoUpdatesUtility.UpdateRTMDownloadUrls()
  • An error occurred while downloading . No URI for this file has been set. Please verify that there is an active internet connection and try again.
  • System.Net.WebException: Unable to connect to the remote server --->
       System.Net.Sockets.SocketException: No connection could be made because the target
          machine actively refused it

Unfortunately, in these cases there is not much we can do. The network would need to allow the installer to download files from our server, which may not be possible on certain networks. Under these circumstances, we provide an offline installer which contains the individual product installers. This enables the Platform Installer to complete the installation of our various products without needing to connect to our servers.

You can access the offline installer in two ways.  The first only applies if you’ve registered a product key to your account.  Simply access your account page to view a list of registered product keys.  Clicking a product key will display a list of “Product Downloads” associated with the product key you’ve selected.  For example, the list includes options such as, “Infragistics 2015 Vol. 1 Product and Samples” and “Infragistics 2015 Vol. 1 Complete Bundle”.  Choosing one of these options will allow you to download an “offline” version of our Platform Installer.

The second way to access the offline Platform Installer is by navigating to our Products Help & Download page, or clicking here, and clicking on the “Offline” link under the Package installer section. This link will provide you with an offline Platform Installer for the latest version of our Infragistics products.
 
Note that the “offline” download still includes the Platform Installer, and some of its options still require a connection to our server. One of these options is checking for and downloading the latest service release. An example of how our 15.1 installer looks is shown below:

By default, the option to download the latest service release is checked, so you will want to uncheck it. Once complete, click the Next button and complete the installation without needing to download anything extra.

If you need the service release you can download it by navigating to the My Keys & Downloads page, or by clicking here.  Simply click on the product key for the version you want and then select the Service Releases tab.

Here is a list of all the service releases for 15.1 available at the time of writing this post. Simply download the ones you require and install them after you have the main product installed. That’s all there is to it!

Line Charts: Where to Start?


I've previously explained that it is essential that the bars of bar charts start at 0. The reasoning is simple: we use relative lengths of bars to compare values, so starting a bar somewhere else leads to false judgements. But what about line charts?

Below is a line chart with three datasets: A, B and C. We can see that:

  1. all lines are well above zero across all the years;
  2. A is roughly flat;
  3. B trends downward with a jump in the mid-1980s;
  4. C trends upwards.

Only point 1 above is enhanced by starting the y axis at 0. If we care more for trends, gradients, and the size of noise then focusing our chart around the area that actually contains data (as below) will help us to see these aspects at an improved resolution. That's true whether we're looking at different sections of one line or comparing across multiple lines.

With this improved resolution we can now see just how big the jump in the mid-1980s is for B - it's a change of 3 or 4 in Value in a single year. We can see that the upward trend in C isn't present in the early years. There might even be a hint that A trends ever so slightly upwards too. Further, while a table is the best option for displaying very precise information, this second chart is still an improvement on the first when it comes to accurately estimating values for a given year.

I've tried to make the case that it isn't generally necessary to include 0 on the vertical axis of a line chart and that there are frequently advantages to not doing so. Nevertheless, it can be useful to guide your audience away from making the assumption that the y axis does start at 0. The chart below illustrates a potential issue.

The problem with this chart is the visual metaphor of line D crashing to the bottom. Of course if the y axis started at 0 this wouldn't be a problem. But we don't need to extend our axis that far to reduce the salience of the misleading metaphor; even a little extension helps.

However, D is still fast approaching the dark(er) horizontal axis at the bottom. While the axis lines provide convenient separators between chart area and labels, they're not strictly necessary. So we can remove the x-axis line and tick marks without any loss in meaning.

Still, the labels themselves could be seen as an indicator of line D's fast approach to the bottom. Why not move them to the top?

We could probably stop there. But I like experimenting. The final change I'm going to make to this chart is more of a novel, but subtle, experiment. Rather than simply suppress one visual metaphor - the line crashing into the axis at the bottom - we'll attempt to replace it with another. By fading away the bottom of the chart area we'll try to convey the idea that the vertical scale actually continues on downwards into the distance.

Is this last change helpful, a hindrance or neither? I'm not sure. I don't think it's particularly straightforward to implement in most charting software. Hence, one of Colin Ware's guidelines for information visualization (Colin Ware, Information Visualization, Third Edition, page 24) seems relevant: "Consider adopting novel design solutions only when the estimated payoff is substantially greater than the cost of learning to use them."

So far the discussion has been centered entirely on modifying the vertical scale. The horizontal extent of the datasets has been ignored, or it has been implicitly assumed that what's visible is all there is. Frequently time series are cropped in the horizontal direction. This may seem like a dubious activity, but it is often just a means of increasing resolution over a specific period of interest - exactly the same benefit that we saw above from reducing the vertical axis. There is, however, a notable difference. Reducing the vertical extent of a line chart will generally only reduce the whitespace. Cropping the horizontal axis reduces whitespace and removes data from view. For that reason, when you first see a line chart you have reason to distrust, perhaps the first question to ask is "Why does the x axis start there?" and not "Why doesn't the y axis start at/include 0?" Of course, when you're making your own charts you should ask yourself both of these questions.

What's New in IG TestAutomation 2015.1


The primary focus for Infragistics Test Automation is to offer you peace of mind: when you develop with our controls, your teams can seamlessly implement automated testing of the user interface (UI) to confidently produce higher-quality releases. IG Test Automation currently supports our Windows Forms controls via HP’s Unified Functional Testing (UFT) and IBM’s Rational Functional Tester (RFT). We also support our WPF controls via HP’s UFT.

As IG Test Automation’s purpose is to automate UI tests of applications built with IG controls, this what’s-new will start with the new controls. I will follow with changes to existing controls that affect how the UI behaves, and lastly I’ll list the bug fixes implemented this cycle.

New Infragistics Controls


IG WPF’s XamTreeGrid

The xamTreeGrid control is the latest addition to the Data Presenter family of controls. It arranges data in a tree grid layout. Essentially, the control is a xamDataPresenter that implements a single view (a tree view) which cannot be dynamically switched.

 XamTreeGrid Example

 

 

IG Controls with UI-Altering Improvements


More Right to Left Support in Windows Forms

In 14.1 we started introducing Right to Left support in our editor controls. In 15.1 we expanded our Right to Left support to our UltraExplorerBar.

IG Windows Forms' User Voice Requested Features

We regularly take feedback and requests from our customers and turn them into features, and this release was no different: a number of features from our User Voice were implemented in our controls. Below are the features that IG TestAutomation specifically had to implement support for. Have an idea for one of our products? Submit it through User Voice here.

  • Print Preview Dialog, Select Printer button
  • UltraGrid ColumnChooser, Select multiple columns at once

IG WPF's XamSpreadsheet Improvements

There were several improvements to the XamSpreadsheet, including modified user functionality when workbook and worksheet protection is enabled, but it was the addition of underline and hyperlink support that affected the UI: key commands change the underline format of a cell, and clicking a cell’s hyperlink activates it.

New Bug Fixes in 2015.1

TA Product        | Control          | Description
------------------|------------------|------------------------------------------------------------
Win Forms for HP  | All              | Trial period expiration occurs during record or replay against CLR 2 assemblies with UFT on Windows 7 64-bit
Win Forms for IBM | UltraExplorerBar | RFT does not record properly against a deeply nested ExplorerBar
WPF for HP        | XamDockManager   | Controls added directly to the XamDockManager, instead of via a ContentPane, were not recognized
WPF for HP        | XamDataGrid      | Accessing the second filtered record throws an out-of-index runtime error
WPF for HP        | XamDataGrid      | The filtering window is not recognized when Excel Style Record Filtering of the XamDataGrid is activated
WPF for HP        | XamPivotGrid     | The field chooser of the XamPivotGrid is not recognized

 

Download the free trial of IG TestAutomation for HP 15.1
http://www.infragistics.com/products/windows-forms-test-automation-for-hp/download

Download the free trial of IG TestAutomation WPF for HP 15.1
http://www.infragistics.com/products/wpf-test-automation-for-hp/download

Download the free trial of IG TestAutomation for IBM RFT 15.1
http://www.infragistics.com/products/windows-forms-test-automation-for-ibm/download

Voice your suggestions today for the products of tomorrow
http://ideas.infragistics.com/

UXify North America - Conference Videos


This year, instead of hosting World IA Day like we did over the last couple of years, we brought UXify, our own UX conference that we've been running very successfully in Sofia, Bulgaria, to North America.

 

 UXify logo

 

UXify was hosted on April 11 at the Infragistics world headquarters in Cranbury, NJ. Our theme was "The Future of UX Design". We had an impressive line-up of speakers who shared their experiences and inspired great discussions. For anybody who could not attend the event or who wants to watch the presentations again, here are the videos.

 

Kent Eisenhuth, Interaction Designer, Google
Living at the Intersection of Art and Science: The Future Skills of a Designer

[youtube] width="560" height="315" src="http://www.youtube.com/embed/g8P-v1fNgjc" [/youtube]

 

Justin Fraser, Sr. Project Manager, Infragistics
Managing Design Projects: A PM View

[youtube] width="560" height="315" src="http://www.youtube.com/embed/IDKlqhLRa1Q" [/youtube]

 

Sunita Vaswani, Director UX, Deutsche Bank
UX Competencies in Capital Markets

[youtube] width="560" height="315" src="http://www.youtube.com/embed/w1V0s-SIbSY" [/youtube]

 

John Chin, Experience Strategist, Verizon Wireless
Universal Design: One for all, All for one!

[youtube] width="560" height="315" src="http://www.youtube.com/embed/M3xYmeJQXpU" [/youtube]

 

Clare Cotugno, Director of Content Strategy, EPAM Empathy Lab
Content Strategy and the Project Lifecycle

[youtube] width="560" height="315" src="http://www.youtube.com/embed/Leau_mhtXnk" [/youtube]

 

Ronnie Battista, Practice Lead, Experience Strategy + Design, Slalom Consulting
Getting High on Journey Mapping

 [youtube] width="560" height="315" src="http://www.youtube.com/embed/VoKdbLLNsk8" [/youtube]

 

Lisa Woodley, VP Experience Design, NTT DATA
Surviving the Demise of Architecture: Get Strategic, Get Coding, or Get out of the Way

[youtube] width="560" height="315" src="http://www.youtube.com/embed/pVX09b8j50g" [/youtube]

 

All speakers, thank you again for the great presentations!

Next up, on June 19th and 20th we host the European version of UXify. It's a 2-day format with one day of presentations in parallel tracks and one day of workshops. For more information, see here: http://uxify.net/

Setting up an Application in Azure AD for Office 365 API Access


Introduction

To understand how the Office 365 API works, it might be good to explore the underlying REST API and see what happens “under the hood”, to get a clear idea of the interactions between Azure AD, authentication and authorization, as well as how to incorporate interaction with the Office 365 data.

In this blog post, we will look into the configuration and setup of the application in Microsoft Azure AD. The same setup will be used in part 2, when we try out Fiddler to test the REST API.

It’s important to note that we will be working with the REST API in this blog post and NOT using the Office 365 Tools for Visual Studio client SDK.

Getting Started

The first step is to log in to your Microsoft Azure account and register and configure the application in the Azure Active Directory within your tenant. You will also need to set the permissions that are required for your app.

Log in to Microsoft Azure with your login credentials (your Office 365 login credentials), browse to the Azure AD Portal and navigate to your Azure AD account. Click on Applications, then click the Add button on the bottom bar. In the "What do you want to do?" wizard, select "Add an application my organization is developing" and provide a name for the application.

For the example I’ll create in this post, let’s use the name "InfragisticsDemo". From here, let’s select "Web Application and/or Web API".

Click the Next button, then enter your Sign-On URL and Application ID URL.

Your application is now created and registered in your Azure AD. Within a few minutes you will be redirected to the application's page in the Azure AD, where you can edit the application-related configurations for connecting from your Mobile or Web application.

Click on the Configure tab in the application, which will display the configuration-related details of the application.

You’ll see here that there are some configuration items that are very important for connecting your application, including:

  • Client ID
  • Client Secret
  • Reply URL

The Client ID is a unique identifier for your application. You will need to use this if your application needs to access data.

The Client Secret is a key that your app will need if your app reads or writes data in Windows Azure AD, such as data that is made available through the Graph API. You can create multiple keys to address key rollover scenarios, and you can delete keys that are expired, compromised, or no longer in use. To generate these keys, select the duration. Once you save the settings, the key will be displayed only once.

The Reply URL is the physical address for your app to which Windows Azure AD will send SAML authentication tokens for authenticated users. In this scenario, we need not worry about what happens after the authentication; we only need to get the Token in Fiddler.

Below are the details of our demo application:

Client ID: ae2bae60-fc94-411e-bba0-43083e42ab1a

Reply URL: http://Infragistics.com

Client Secret Key: E/g1v+Eryn1d2cAEWsRTeb/SIajLPYv8CjQCDCr7HmY=

Now you should copy the values in your favorite text editor, because we’ll need these when we test the REST API when using Fiddler.
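
To make the upcoming Fiddler exercise more concrete, here is a minimal sketch of the kind of token request these values feed into, assuming the Azure AD (v1) OAuth2 client-credentials flow; the tenant name and resource URI below are illustrative placeholders, not values from this walkthrough:

// Minimal sketch: requesting an access token from Azure AD using the
// Client ID and Client Secret registered above (client-credentials flow).
// "yourtenant.onmicrosoft.com" and the resource URI are assumptions.
using System;
using System.Collections.Generic;
using System.Net.Http;

class TokenRequestDemo
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "client_credentials" },
                { "client_id", "ae2bae60-fc94-411e-bba0-43083e42ab1a" },
                { "client_secret", "<your client secret key>" },
                { "resource", "https://outlook.office365.com/" } // e.g. Exchange Online
            });

            var response = client.PostAsync(
                "https://login.microsoftonline.com/yourtenant.onmicrosoft.com/oauth2/token",
                body).Result;

            // The JSON response contains access_token, token_type, expires_on, etc.
            Console.WriteLine(response.Content.ReadAsStringAsync().Result);
        }
    }
}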

Configure the Office 365 Application Permissions

The next step is to set up the application permissions that give the application access to the Office 365 data.

  1. At the bottom of the Configure screen, click the “Add application” button.
  2. In the Permissions to Other Applications dialog, select Office 365 Exchange Online and Office 365 SharePoint Online and click OK.

  3. From here, you can select the permissions that are needed for your app. The list of possible permissions includes:

 

Exchange Online permissions

  • Read users' calendars
  • Have full access via EWS to users' mailboxes
  • Read users' mail
  • Read and write access to users' mail
  • Send mail as a user
  • Have full access to users' calendars
  • Read users' contacts
  • Have full access to users' contacts

 

Office 365 SharePoint Online permissions

  • Run file search queries as a user
  • Read items in all site collections
  • Edit or delete items in all site collections
  • Create or delete items and lists in all site collections
  • Have full control of all site collections
  • Read users' files
  • Edit or delete users' files

For the demo, let’s select all the permissions, which we will also be using in the upcoming articles, then click the Save button in the bottom bar.

This step completes the setup of the application in Azure AD. In this blog post, we saw how to add your application to Azure AD, configure the permissions and identify the necessary properties like the Client ID and Client Secret. In the next blog post, we will see how to use Fiddler to work with the raw Office 365 data for the above application. Stay tuned!

Webinar Recap: The Top 3 Must-Haves for a Successful Enterprise Mobility Solution


Today’s workforce is global and mobile. The “Bring Your Own Device” (BYOD) trend is a fairly new paradigm that has created a fast-moving train headed right at IT professionals, urging them to react quickly.

In our recent webinar “The Top 3 Must-Haves for a Successful Enterprise Mobility Solution”, Anand Raja, Global Solutions Consulting Manager at Infragistics and Technical Evangelist, shares the 3 secrets that will arm you with the critical criteria to follow when choosing the optimal EM solution for your users.

[youtube] width="560" height="315" src="http://www.youtube.com/embed/OqDy4vagaQ8" [/youtube]

Here is a sneak peek at some of the pressing enterprise mobility challenges Anand shares his insights on:

User Focus and UX

Since the introduction of smartphones and tablets, the way we perform everyday personal and work activities has changed substantially. The consumerization of IT, or the influence of technology designed first and foremost for consumers, has left employees expecting the same fluid and connected experience across private and enterprise applications. This is why user needs are now at the center of mobility decisions.

Security

With so many mobile devices in the field, it is no surprise that enterprise IT departments associate BYOD policies with privacy concerns and company data leakage nightmares. The challenge in front of enterprise IT is how to keep up with constant device upgrades while providing for the secure management of all connected devices (IoT), enterprise applications, content, and data, using MDM (Mobile Device Management), MAM (Mobile Application Management), and MCM (Mobile Content Management) solutions.

Webinar: The Top 3 Must-Haves for a Successful Enterprise Mobility Solution

Mobile Development & Deployment

Enterprises already have long backlogs of mobile apps waiting to be developed. CIOs are searching for ways to shorten development cycles with the help of high-productivity platforms where little to no coding is needed. Nowadays, it is crucial to be able to innovate fast in order to stay competitive. This is where low-coding platforms, such as Infragistics’ mobile SharePoint platform, come in handy in enabling enterprise innovation.

Another long-standing debate is where to deploy enterprise mobile apps – should enterprises trust external cloud providers, such as Amazon and Microsoft, or keep apps in-house? The cloud model is compelling with its benefits of flexibility and operational cost savings, but is it the right decision for every enterprise?

Big Data on the Go

Employees, LOB managers and executives need to access data and make critical decisions day-to-day no matter where they are. Mobile business intelligence solutions can deliver real-time business information to user devices exactly when they need it, online or offline. The possibility of always having the data you need, personalized to your way of work, and even being able to collaborate on it with colleagues on the go, has empowered a shift in productivity we never imagined!

Mobile opportunities in front of enterprises are vast – the question is, how to decide which ones to pick? Watch Anand Raja’s webinar to learn the top 3 must-haves for a successful enterprise mobility solution here.

Looking for a comprehensive and secure mobile Office 365 and SharePoint solution, which you can customize to your preferences? Look no further. Download our SharePlus Enterprise for iOS free demo now and see the wonders it can do for your team's productivity!

SharePlus for SharePoint - Download for Android

uxcamp Copenhagen - the topics


Pitching your talk and listening to other amazing people


The #uxcampcph logo. Image attributed to: http://uxcampcph.org/Uploads/UXCampCPH_HVID_transparant.png

This is a continuation of an earlier blog about my experiences at UX Camp CPH 2015 with a focus on the topics presented there.

In a blog last week I tried to explain what lean stands for in a broader sense and to relate the concept to an event that I recently attended. UX Camp Copenhagen is a forum organized in a lean fashion and had Jeff Gothelf, the father of “Lean UX”, as its keynote speaker. In this blog I would like to share a bit more about the conference, the topics I attended and the one that I offered to the other attendees. I will start with the Friday night to set the mood, continue with a break-the-ice session Saturday morning, followed by the attendee-generated content, and end with Jeff’s closing keynote on Lean UX.

Setting the mood

Friday night began with three invited speakers, who offered very different topics. First, Jonas Priesum from theeyetribe talked about eye-tracking, the science behind it and its related problems, such as how users might visually select items on-screen. Of course, the inevitable debate over “blink to select” versus “dwell to select” spiced up the discussion, but it all ended with a nice overview of the empowering potential of the technology for the hospitalized and the disabled.

Next it was time for Johan Knattrup to talk about the interactive movie experience that his team created using Oculus Rift, called Skammekrogen. They basically directed a 20-minute immersive movie experience that could be lived through the eyes of one of the actors through the use of a virtual reality headset. What was particularly interesting was how their initial screening of the film seemed to doom the whole concept. Movie viewers failed to feel very “immersed” in one particular character; they actually felt alienated throughout the movie when in the shoes of that particular actor. Initially the team’s understanding was that they had failed to achieve immersion and all their shooting and directing efforts were in vain. But after a more in-depth analysis of their script, they realized that it was actually written such that this particular character was distant to everyone else. This, it turns out, immersed movie viewers beyond everyone’s initial expectations.

The final speaker of the night was Thomas Madsen-Mygdal, ex-chairman of podio, who spoke about belief. According to Madsen-Mygdal, belief in something is a choice, and belief in the power of the Internet 20 years ago was what drove humanity forward. He also suggested that those who ultimately succeed in life are those who believe in seemingly unattainable long-term goals – particularly when the odds are against them. Perhaps the most important thing that stuck in my mind was the notion of belief as “the most important design tool in life”.


Johan Knattrup to the left and Thomas Madsen-Mygdal to the right setting the mood on Friday night. Image attributed to the author.

My take on the whole of Friday night was that I was in the right place. No matter if I were more of a researcher, or an artist, or a philosophical type of person, this was the place and the time for anyone to share anything they were passionate about, regardless of how crazy it might seem.

Breaking the ice

Saturday morning brought us a hidden gem with Ida Aalen’s talk about The Core Model. I particularly loved the way she “killed” the homepage-first design approach by showing that most of the time we end up on a child page from a Google search or by following a link shared on social media. And if we think about it for a second, she is absolutely correct; we rarely see the homepage even when we explore some of the IA of a given website. The framework that she extensively uses and promotes, called The Core Model, is definitely one of the things that I cannot wait to put into practice in my upcoming design challenges.

Talks from the people and for the people

Luckily, all who pitched talks managed to find a slot on the schedule. This highlighted the impressive efforts of the organizers because 27 of us each had thirty seconds, one after another. Once the schedule was ready, I decided to spend my first slot with Nanna and the rest of Think! Digital in a discussion about designing with and for the actual content. We spoke about the importance of getting actual content as early as possible and prototype with it instead of the “Lorem ipsum…” that is so familiar to the design world. Having content early means we decrease the probability that a piece of content will ruin our layout later in the project. Rather, the content becomes a design constraint known from the very beginning.

My second slot was spent with Pascal from ustwo in London. It was probably the most anticipated talk of the day after an amazing pitch and he definitely kept his promise. Pascal spoke about the digital identity that we create through all our gadgets, how they quantify us and the implications of this journal of our life (e.g., ownership, privacy and longevity) as these journals are very likely to outlive us.

The third session on my list was with Steven from Channel 4, another speaker from the UK. He talked about their design process, involving experience maps and user journeys, taking as a case study the launch of his company’s “On Demand” product.

Doing my part

At the end of the day it was time for the talk that I had prepared: “Designing Usable Dashboards”. I picked that topic for two reasons. Firstly, we at Infragistics know how to design usable dashboards. We have demonstrated that on a number of occasions, such as the Xamarin Auto Sales Dashboard, Marketing Dashboard, Finance Stocks App, and CashFlow Dashboard, to name just some of our latest work. Secondly, I was really inspired by the webinar How To Design Effective Dashboards, recently presented by Infragistics Director of Design, Tobias Komischke. Despite the fact that my slides had a researcher’s approach to data visualization, the lengthy discussion at the end of the talk left me with the feeling that it quenched the crowd’s thirst for the topic.


Designing Usable Dashboards presentation by the author. Image attributed to the author.

The icing on the cake

There was only one thing standing between us and the beer in the bar that signifies the end of such community-driven forums. It was what turned out to be inarguably the best talk of the whole event – Jeff Gothelf and Lean UX. Originally from New Jersey, where Infragistics’ headquarters are located, he shared his struggle to create a design team in a NYC startup, a team that had to work with the agile software development process already established in the company. Jeff shared the ups and downs along the way, and the birth of what he eventually coined “the Lean UX approach”. He spoke about continuous feedback loops, conversations as a core communication medium and the importance of learning and change. He also spoke about how crucial it is to learn whether your assumptions are valid by testing a hypothesis with minimal effort, as quickly as possible, and that once you are better informed, you have to be willing to change and iterate to move your product forward.


Jeff Gothelf talking about lean UX. Image attributed to the author.

UX Camp Copenhagen, thank you once again for the great event; it was really a pleasure to be part of it. I hope to see you again next year.


Bar Charts versus Dot Plots


Bar charts have a distinct advantage over chart forms that require area or angle judgements. That's because the simple perceptual tasks we require for decoding a bar chart - judging lengths and/or position along a scale - are tasks we're good at. But we also decode dot plots through judging position along a scale. Is there a reason to choose one over the other?

To explore this question I'm going to create several bar charts and dot plots from a real-world dataset. Specifically, we'll be looking at the World Health Organization (WHO) table of life expectancy by country. It covers three different years - 1990, 2000, and 2012 - and we'll just look at the life expectancy at birth across both sexes combined. Data is rounded to the nearest whole year.

Let's start by looking at the increase in life expectancy between 1990 and 2012 for 12 of the G-20 nations.

Which chart is better? With the bar chart you can compare lengths as well as position, but if you're an ardent disciple of Edward Tufte then the dot plot has the better data-ink ratio. In addition, one could always change the lines in the dot plot so that they only go from 0 to the position of the dot if one wanted to judge based on length. In the end, I think in this simple case it's probably just a matter of personal preference.

What if, instead of looking at the difference between 2012 and 1990 for each country, we just wanted to show the two corresponding values? In the bar chart case we create a grouped bar chart, in the dot plot case we string two different symbols on each line.

It's easy to compare the two bars from the same country, but if we want to compare across countries for the same year we must ignore the presence of half the bars. Because these bars provide quite a dense concentration of color, this isn't all that easy a task. With the dot plot, comparison for the same country is even easier - we just scan along the same horizontal line. I think comparison between countries for the same year is also simpler: there are no large blocks of color to distract us when we want to compare blue circles to other blue circles or red squares to other red squares.

That covers the most obvious decoding tasks, but can we extract any other insights? I think it's immediately apparent from the dot plot that Turkey has seen the biggest increase in life expectancy (as was obvious when directly plotted in the first example). With the grouped bar chart, that information is there but it is somewhat concealed. Similarly, I think that the fact that the life expectancy in India in 2012 was lower than for most of the listed countries in 1990 is more obvious in the dot plot.

Let's add the middle year of measurement to the chart and see what difference that makes.

Now things look a bit cramped. In the case of the dot plot, for example, there is an overlap between the marker for the year 2000 and one of the other two years in eight of the twelve cases. But we can change things with the dot plot more than we can the bar chart. Assuming we're restricted to the same horizontal and vertical space as above, about the only thing we can do with the bar chart is change the horizontal scale so its maximum coincides with the maximum in the data. But with the dot plot, because line length does not encode anything, we can expand our scale in both horizontal directions to whatever is convenient.

Things are much clearer now in the dot plot while the bar chart is barely any different.

The above discussion gives several reasons for favoring a dot plot over a bar chart. The dataset used is, however, quite well-behaved. Specifically, for each country the life expectancy increased from 1990 to 2000 to 2012. This was not universally the case across the globe. In fact if I'd picked a different sample of twelve countries from the G-20, like the one below, our dataset would not have been so well-behaved.

In the case of South Africa and Russia we have overplotting in the dot plot. That's a problem we can probably deal with. We could use semi-transparent points, for example. The bars of a grouped bar chart do not lie on the same line and so overplotting will never be an issue.

Software Design & Development Conference


I'm heading out tomorrow night to attend and speak at Software Design & Development, a yearly conference in London, UK. My talk, on May 14th, is called "Assessing UX" and provides a 360-degree view of the different dimensions of user experience and the concrete things to look for when assessing those dimensions. Free tools are presented that help to check concepts and products for their UX quality. I involve the audience in a live usability test demonstration, as well as a 5-minute Q&A period right at the end of the presentation. Should be fun!

UXify Animating Name Badges


In case you missed out on UXify 2015 last month, check out the recent Infragistics blog UXify North America – Conference Videos for all 8 presentations covering “The Future of UX Design”.


In addition to an afternoon of free lectures, conference goers also received interactive animating name badges. At first glance, the name badge appears to be the attendee’s name printed on a card along with an abstract design. But with the addition of a second transparent card overlaying the image, the design comes to life.

The name badge uses a method of animation known as “scanimation”. A six frame animation is combined into a single abstract image. By moving a striped acetate overlay across the image, the viewer is only able to see one frame at a time. As the frames are quickly strung together, the once static image creates the illusion of movement.
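
For the technically curious, the interlacing itself is simple to reproduce. Below is a minimal sketch in C#, assuming six equally sized frame images with illustrative file names; this is not how the actual badges were produced, just an illustration of the technique:

// Build a scanimation image: each vertical stripe of the output is copied
// from one of six frames in rotation, so a striped overlay whose slits
// line up with one stripe "slot" reveals a single frame at a time.
using System.Drawing; // System.Drawing.Common on modern .NET

class Scanimation
{
    static void Main()
    {
        const int frameCount = 6;  // six-frame animation, as described above
        const int stripeWidth = 2; // pixels per stripe (assumed value)

        // Load six equally sized animation frames (illustrative names).
        var frames = new Bitmap[frameCount];
        for (int i = 0; i < frameCount; i++)
            frames[i] = new Bitmap($"frame{i}.png");

        int w = frames[0].Width, h = frames[0].Height;
        var combined = new Bitmap(w, h);

        for (int x = 0; x < w; x++)
        {
            // Which frame owns this stripe of columns.
            int frame = (x / stripeWidth) % frameCount;
            for (int y = 0; y < h; y++)
                combined.SetPixel(x, y, frames[frame].GetPixel(x, y));
        }

        combined.Save("scanimation.png");
    }
}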

Try the animation for yourself using this interactive prototype: http://indigodesigned.com/share/7qn4datqwwqu

Interested in sharing your own prototypes? Check out the all new platform for sharing Indigo Studio prototypes: IndigoDesigned.com

NucliOS Release Notes - May: 14.2.331, 15.1.70 Service Release


Introduction

With every release comes a set of release notes that reflects the state of resolved bugs and new additions from the previous release. You’ll find the notes useful to help determine the resolution of existing issues from a past release and as a means of determining where to test your applications when upgrading from one version to the next.

Release Notes: NucliOS 2014 Volume 2 Build 331 and NucliOS 2015 Volume 1 Build 70

Component       | Product Impact | Description | Service Release
----------------|----------------|-------------|-----------------
IGChartView     | Bug Fix | The first and last points are cropped in the OHLC and Candlestick series. Note: Added useClusteringMode to the category axis. Setting this property to true will stop cutting off half of the first and last data points in financial price series. | 14.2.331, 15.1.70
IGSparklineView | Bug Fix | Sparkline as a line is closing its geometry path. Note: Fixed line-type sparkline rendering a filled polygon instead of a polyline. | 14.2.331, 15.1.70

By Torrey Betts

How to use AngularJS in ASP.NET MVC and Entity Framework


These days, it seems like everyone is talking about AngularJS and ASP.NET MVC, so in this post we will combine the best of both worlds and demonstrate how to use AngularJS in an ASP.NET MVC application. Later in the post, we will see how to access data using the Entity Framework database-first approach, how to access that data from AngularJS, and how to pass it to the view using the controller. In short, this post will touch upon:

  • adding an AngularJS library in ASP.NET MVC;

  • referencing the AngularJS library with bundling and minification;

  • fetching data using the Entity Framework database-first approach;

  • returning JSON data from an ASP.NET controller;

  • consuming JSON data in an AngularJS service;

  • using an AngularJS service in an AngularJS controller to pass data to the view; and

  • rendering data on an AngularJS view.

To start, let’s create an ASP.NET MVC application and right-click on the MVC project. From the context menu, click on Manage NuGet Packages, search for the AngularJS package and install it into the project.

 

After successfully adding the AngularJS library, you can find its files inside the Scripts folder.

Reference of AngularJS library

You have two options for adding an AngularJS library reference to the project: MVC minification and bundling, or adding AngularJS in the scripts section of an individual view. If you use bundling, AngularJS will be available in the whole project; however, you also have the option to use AngularJS on a particular view only.

Let’s say you want to use AngularJS on a particular view (Index.cshtml) of the Home controller. First you need to refer to the AngularJS library inside the scripts section as shown below:

@section scripts{

    <script src="~/Scripts/angular.js"></script>

}

 

Next, apply the ng-app directive and any other required directives on the HTML element as shown below:

<div ng-app="" class="row">
    <input type="text" ng-model="name" />
    {{name}}
</div>

 

When you run the application you will find AngularJS is up and running in the Index view. In this approach you will not be able to use AngularJS on the other views because the AngularJS library is only referenced in the Index view.

You may have a requirement to use AngularJS in the whole MVC application. In this case, it’s better to use MVC’s bundling and minification and register the AngularJS library at the layout level. To do this, open BundleConfig.cs from the App_Start folder and add a bundle for the AngularJS library as shown below:

 

public static void RegisterBundles(BundleCollection bundles)
{
    bundles.Add(new ScriptBundle("~/bundles/angular").Include(
                "~/Scripts/angular.js"));

    // ... the rest of the default bundles are registered here ...
}

 

After adding the bundle in the BundleConfig file, next you need to add the AngularJS bundle in the _Layout.cshtml as listed below:

<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>@ViewBag.Title - My ASP.NET Application</title>
    @Styles.Render("~/Content/css")
    @Scripts.Render("~/bundles/modernizr")
    @Scripts.Render("~/bundles/angular")
    @Scripts.Render("~/bundles/jquery")
    @Scripts.Render("~/bundles/bootstrap")
    @RenderSection("scripts", required: false)
</head>

 

After creating an AngularJS bundle and referring to it in _Layout.cshtml, you should be able to use AngularJS in the entire application.

 

Data from the Database into AngularJS

So far we have seen how to set up AngularJS at a particular view level and at the entire application level. Now let’s go ahead and create an end-to-end MVC application in which we will do the following tasks:

  1. Fetch data from the database using the EF database-first approach
  2. Return JSON from the MVC controller
  3. Create an AngularJS service to fetch data using $http
  4. Create an AngularJS controller
  5. Create an AngularJS view on the MVC view to display data in a table

Connect to a database using the EF database-first approach

To connect to a database with the EF database-first approach, right-click on the MVC application and add a new item. From the Data tab, select the ADO.NET Entity Data Model option as shown in the image below:

 

From the next screen, select the “EF Designer from database” option.

 

On the next screen, click on the New Connection option. To create a new connection to the database:

  1. Provide the database server name
  2. Choose the database from the drop-down. Here we are working with the “School” database, so we’ve selected that from the drop-down.

 

 

 

On the next screen, leave the default name of the connection string and click next.

 

On the next screen, select the tables and other entities you want to keep as part of the model. To keep it simple, I am using only the “Person” table in the model.

 

We have now created the connection with the database, and a model has been added to the project. You should see an .edmx file added as part of the project.

 

Return JSON from the MVC controller

To return the Person data as JSON, let us go ahead and add an action to the controller with the return type JsonResult. Keep in mind that you could easily write a Web API to return JSON data; however, the purpose of this post is to show you how to work with AngularJS, so I’ll stick with the simplest option, which is creating an action that returns JSON data:

public JsonResult GetPersons()
{
    SchoolEntities e = new SchoolEntities();
    var result = e.People.ToList();
    return Json(result, JsonRequestBehavior.AllowGet);
}
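
As noted above, a Web API could return the same JSON. Purely for comparison, here is a minimal sketch of that alternative, assuming ASP.NET Web API 2 is installed with its default "api/{controller}" route; the class name is illustrative and is not part of this walkthrough:

// Hypothetical Web API equivalent of the GetPersons action above.
// Requires the System.Web.Http and System.Collections.Generic namespaces.
public class PersonsController : ApiController
{
    // GET api/persons -- Web API serializes the result to JSON (or XML,
    // depending on content negotiation) automatically.
    public IEnumerable<Person> Get()
    {
        using (var e = new SchoolEntities())
        {
            // ToList() materializes the data before the context is disposed.
            return e.People.ToList();
        }
    }
}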

 

Create an AngularJS service to fetch data using the $http

Here I assume that you already have some knowledge of these AngularJS terms, but here’s a quick review of the key concepts:

Controller

A controller is the JavaScript constructor function which contains data and business logic. The controller and the view talk to each other using the $scope object. Each time a controller is used on the view, an instance gets created. So if we use it 10 times, 10 instances of the constructor will get created. 

Service

A service is a JavaScript function for which an instance gets created only once per application life cycle. Anything shared across controllers should be part of a service. A service can be created in five different ways; the most popular are the service method and the factory method. AngularJS also provides many built-in services: for example, the $http service can be used to call an HTTP-based service from an Angular app. A service must be injected before it is used.

Modules

Modules are the JavaScript functions which contain other functions like a service or a controller. There should be at least one module per Angular app.

Note: These are the simplest definitions of these AngularJS concepts. You can find more in depth information on the web.

Now let’s start creating the module! First, right-click on the project and add a JavaScript file. You can call it anything you’d like, but in this example, let’s call it StudentClient.js.

In the StudentClient.js we have created a module and a simple controller. Later we will modify the controller to fetch the data from the MVC action.

var StudentApp = angular.module('StudentApp', []);

StudentApp.controller('StudentController', function ($scope) {

    $scope.message = "Infragistics";
});

 

To use the module and the controller on the view, first you need to add a reference to StudentClient.js and then set the value of the ng-app directive to the module name, StudentApp. Here’s how you do that:

@section scripts{
     <script src="~/StudentClient.js"></script>
}

<div ng-app="StudentApp" class="row">
    <div ng-controller="StudentController">
        {{message}}
    </div>
</div>

 

At this point if you run the application, you will find Infragistics rendered on the view. Now let’s create the service. We will create the custom service using the factory method; in the service, the built-in $http service will call the action method of the MVC controller. Here we’re putting the service in the same StudentClient.js file.

StudentApp.factory('StudentService', ['$http', function ($http) {

    var StudentService = {};
    StudentService.getStudents = function () {
        return $http.get('/Home/GetPersons');
    };
    return StudentService;
}]);

 

Once the service is created, next you need to create the controller. In the controller we will use the custom service and assign the returned data to the $scope object. Let’s see how to create the controller in the code below:

StudentApp.controller('StudentController', function ($scope, StudentService) {

    getStudents();
    function getStudents() {
        StudentService.getStudents()
            .success(function (studs) {
                $scope.students = studs;
                console.log($scope.students);
            })
            .error(function (error) {
                $scope.status = 'Unable to load customer data: ' + error.message;
                console.log($scope.status);
            });
    }
});

 

Here we’ve created the controller, service, and module. Putting everything together, the StudentClient.js file should look like this:

var StudentApp = angular.module('StudentApp', []);

StudentApp.controller('StudentController', function ($scope, StudentService) {

 

    getStudents();

    function getStudents() {

        StudentService.getStudents()

            .success(function (studs) {

                $scope.students = studs;

                console.log($scope.students);

            })

            .error(function (error) {

                $scope.status = 'Unable to load customer data: ' + error.message;

                console.log($scope.status);

            });

    }

});

 

StudentApp.factory('StudentService', ['$http', function ($http) {

 

    var StudentService = {};

    StudentService.getStudents = function () {

        return $http.get('/Home/GetPersons');

    };

    return StudentService;

 

}]);

 

On the view we can now use the controller as shown below. Keep in mind that we are creating the AngularJS view on the Index.cshtml:

 

@section scripts{
    <script src="~/StudentClient.js"></script>
}

<div ng-app="StudentApp" class="container">
    <br/>
    <br/>
    <input type="text" placeholder="Search Student" ng-model="searchStudent"/>
    <br/>
    <div ng-controller="StudentController">
        <table class="table">
            <tr ng-repeat="r in students | filter : searchStudent">
                <td>{{r.PersonID}}</td>
                <td>{{r.FirstName}}</td>
                <td>{{r.LastName}}</td>
            </tr>
        </table>
    </div>
</div>

 

On the view, we are using the ng-app, ng-controller, ng-repeat, and ng-model directives, along with the “filter” filter to restrict the table rows to whatever is entered in the textbox. Essentially, these are the steps required to work with AngularJS in an ASP.NET MVC application.
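Incidentally, the same filtering is available in code through Angular’s $filter service, should you ever need it outside the template. A hedged sketch, assuming $filter is added to the controller’s injection list:

// Inside StudentController, with the signature changed to
// function ($scope, $filter, StudentService) { ... }
var matches = $filter('filter')($scope.students, $scope.searchStudent);
console.log(matches.length + ' students match the current search');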

 

Conclusion

In this post we focused on a few simple but important steps for working with AngularJS and ASP.NET MVC together. We also touched upon the basic definitions of some key AngularJS components, the EF database-first approach, and MVC. In future posts we will go into more depth on these topics, but I hope this post helps you get started with AngularJS in ASP.NET MVC. Thanks for reading!

Top Enterprise Mobility Events in 2015


Paul Carter, CEO of Global Wireless Solutions, hit the nail on the head when he said “we are officially living in a mobile-first world”. The extent to which mobile plays a role in our lives is difficult to measure. Maybe the simplest way to gain an understanding of its magnitude is by asking yourself, ‘when was the last time I went a day without using my mobile?’

If you’re honest, it’s probably only a few hours at most. And while browsing social networks and checking emails may be the most popular and humblest form of interaction, the role that mobile plays in our lives has much more significance and substance. Did you see the latest news? Check CNN. Want to know how your stock is performing? Browse the markets. Forgot to get that important birthday present? ‘Click and collect’ will save the day.

Never has information been so readily available and easy to access. This not only relates to our personal lives but has a distinct impact on the enterprise and how we all work. We can prepare a presentation, edit a document, chat to colleagues remotely, save that new idea, set up a meeting and much more. At Infragistics we know the importance that mobile as a whole - be it apps, UX, platforms, wearables, design, testing etc. - has in today's world. As a result we like to keep up to speed with everything going on in our community.

And with events often leading the way with new ideas, latest news and innovative thinking, we wanted to share with you 4 enterprise mobility events that we think you should look out for in 2015.

The Mobile Show Middle East 2015

12 - 13 May, Dubai

One of the biggest events on our list, The Mobile Show Middle East is “Where leaders and pioneers of mobile technology meet to explore radical new ideas”. There are a number of topics on the agenda, from ‘Apps and Content’ and ‘Platforms and Devices’ to pure ‘Enterprise Mobility’ and ‘Infrastructure and Security’. Focusing ‘on everything the mobile industry needs to know’, this 2-day conference is aimed at developers, device manufacturers, regulators, digital marketers, mobile consultants, department heads and more.

The stats are pretty impressive too. With over 10,000 attendees, 250 exhibitors, 100 VIPs from telcos, enterprise and government lined up and an estimated 300 facilitated buyer sessions, it’s sure to be a great event which aims to help those attending discover the latest in mobile solutions which can benefit their businesses. For more information check out their site.

Apps World, North America

12 - 13 May, San Francisco

The Apps World conference in California covers one of the largest growing industries - and one we know a lot about - mobile apps. Mobile usage has overtaken desktop usage and these numbers continue to rise. Known as a must attend conference for app developers, the event provides an opportunity to meet over 10,000 ‘leading developers, brands and industry professionals from across the entire app ecosystem’.

This event is huge and has a mighty impressive speaker line up. In fact you’d be hard pushed to find one better. From the Co-founder of Twitter, Chief Evangelist of Microsoft, Lead Android UI designer at Google, CEO of OneNote, Chief Digital Officer of the NFL and Senior Director of Nike (to name a few), attendees will hear from some of the very best that the industry has to offer.

The Enterprise Mobility Forum

14 - 15 May, South Africa

Taking place at the luxurious Arabella Hotel and Spa just outside of Cape Town, the Enterprise Mobility Forum is aimed at senior executives and decision makers and is strictly invite only. Attendees are treated to five themes over two days - ‘Managing and Securing the Mobile Enterprise’, ‘Aligning Strategies to Business Objectives’, ‘Mobile Applications, Platforms and Services’, ‘Sub-Saharan Africa: Connected and Mobile’ and ‘Enterprise Mobility: Looking to the Future’.

With Microsoft as the platinum sponsor you can expect to see and hear from a range of top level management from Barclays, Investec, Microsoft, the Johannesburg Stock Exchange, SAP, HP and more. Since its inaugural conference there’s been a consistent rise in forum attendees and leading vendors, highlighting that Africa’s premium enterprise mobility event is one to watch. And with the world's second largest continent playing a prominent role in this year’s 20 fastest growing economies, it’s set to keep growing.

Enterprise Mobility Management

18 June, London

EMM 2015 is the “UK’s leading enterprise mobility management event for business and technology professionals”. Now in its fourth year, the event will cover a whole host of hot topics from collaborative working and Mobile Applications Management (MAM) to wearable tech in the workplace and mobile big data. A particular focus this year will be on the increase of BYOD. As research has highlighted, ‘the BYOD market size is set to grow to over $284 billion by 2019’. It’s also estimated that by 2017 half of all employers will require employees to supply their own device for work purposes.

Featuring client use cases and case studies, the emphasis is very much on real-life scenarios, sharing best practices and providing practical business advice. So if you’re a CEO, CIO, Director, Enterprise Architect, BYOD Manager, Risk Analyst or Specialist, then this event in London is one for you.

Looking for a comprehensive and secure mobile Office 365 and SharePoint solution, which you can customize to your preferences? Look no further. Download our SharePlus Enterprise for iOS free demo now and see the wonders it can do for your team's productivity!

SharePlus for SharePoint - Download for Android

Top 10 features of VS in 2015


It’s no secret among developers that there is no better development environment than Microsoft Visual Studio. It offers the most complete set of tools to create powerful Windows, web, and other applications, in almost any common language. Visual Studio is available in a version that fits every developer’s needs. Recently Microsoft announced the new Visual Studio 2015 product line, including the new Visual Studio Enterprise with MSDN, Visual Studio Professional with MSDN and the free Visual Studio Community edition.

The Visual Studio Community edition is a free version that has the same capabilities as the professional edition. Any developer can download this version and use it in an academic environment or in a team with no more than 5 developers.

In this post we will take a look at some of the top features in the newest edition of Visual Studio.

1. UI Debugging Tools for XAML

Visual Studio is often used to develop WPF applications and these applications are built with XAML. Two new tools have been added in the new version to inspect the visual tree of running WPF applications, as well as the properties on the elements in the tree. These tools are Live Visual Tree and Live Property Explorer. By using these tools you will be able to select any element and see the final, computed and rendered properties. In a future update these tools will also support Windows Store apps.

2. Single Sign-in

As developers today use more and more cloud services - Azure for data storage, Visual Studio Online as a code repository, the app store for publishing - they previously had to sign in to each cloud service separately. In the latest release the authentication prompts are reduced, and many cloud services now support single sign-on, which is a much-welcome feature!

3. CodeLens

CodeLens, a tool that already existed in previous versions, is used to find out more about your code while you keep working in the editor. The CTP 6 release enables CodeLens to visualize the code history of your C++, SQL or JavaScript files versioned in Git repositories by using the file-level indicators. When using work items in TFS, the file-level indicators will also show the associated work items.

4. Code Map

Code Map is a tool that will visualize the specific dependencies in the application code. The tool enables you to navigate the relationships by using the map. This map helps the developer to keep track of their current position in the code while working. In addition to some performance improvements, there are some other new features in the Code Map tool such as filtering, external dependency links, improved top-down diagrams and link filtering.

5. Diagnostics Tools

In the new release the Diagnostic Tools debugger now supports 64-bit Windows Store apps and the timeline now zooms as necessary so the most recent break event is always visible.

6. JavaScript editor

JavaScript is the language of the future, so the CTP 6 release also brings a few editor improvements, including:

 

  • Task list support. You can add a //TODO comment in your JavaScript code, and a new task will be created in the Task List.
  • Object literal IntelliSense. The JavaScript editor now offers IntelliSense suggestions when passing an object literal to functions documented using JSDoc (see the sketch below).
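Here is a small illustration of both bullets (the function is hypothetical; the JSDoc shape is the kind the editor can read):

/**
 * @param {{name: string, age: number}} options - documented with JSDoc, so the
 * editor can suggest "name" and "age" when an object literal is passed in.
 */
function registerStudent(options) {
    // TODO: validate the input - this comment also shows up in the Task List
    console.log(options.name, options.age);
}

registerStudent({ name: 'David', age: 20 });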

 

7. Unit tests

In the Visual Studio 2015 Preview, Smart Unit Tests were introduced; these generate test data and a suite of unit tests by exploring the code. In CTP 6 you can now take advantage of parameterized unit tests and create test stubs via the context menu.

8. Visual Studio Emulator for Android

As Visual Studio is no longer a tool used only for developing Windows applications, the CTP 6 version adds an improved emulator for Android with OpenGL ES, Android 5.0, Camera interaction and multi-touch support.

9. Visual Studio Tools for Apache Cordova

The latest release of Visual Studio not only offers support for debugging Android, iOS and Windows Store applications, but now adds debugging support for Apache Cordova apps that target Windows Phone 8.1.

10. ASP.NET

The CTP 6 release adds some new features and performance improvements for ASP.NET developers, such as:

 

  • Run and debug settings that can be customized by editing the debugSetting.json file
  • The ability to add a reference to a system assembly
  • Improved IntelliSense while editing project.json
  • A new Web API template
  • The ability to use PowerShell to publish the ASP.NET 5 application
  • Lambda expressions in the debugger watch windows

 

Continuous improvements

Here at Infragistics we’re constantly impressed by Visual Studio because of the continuous improvements and new features it offers to any developer. If you want to get stuck in and take a look at the new features and updates, you can start immediately by downloading the CTP 6 release here. Have fun!

If you are looking for the fastest grid on the market, this is your place. Download our Developer toolkit and test it now!


MVVM: Data Binding Rich Text to the Infragistics XamRichTextEditor


The Infragistics xamRichTextEditor control is a highly customizable rich text editing control that provides functionality modeled after the features and behavior of Microsoft Word.  You can easily create and edit Microsoft Word documents using the xamRichTextEditor.  Here’s the thing though… not every app that uses rich text uses Word, or even deals with a document at all.  Sometimes you just have a string stored in a database somewhere that holds all the rich text information as RTF or even HTML.  So, if you’re using MVVM, and populating a property with this string of rich text data, how do you data bind it to the xamRichTextEditor control?  Easy!  Use a document adapter.

Binding to Visual Elements

Let’s say that I have a xamRichTextEditor control and I want to data bind the rich text being generated by the control to another element in my view.  Let’s say a TextBlock.

<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition/>
        <ColumnDefinition/>
    </Grid.ColumnDefinitions>
    
    <ig:XamRichTextEditor x:Name="_rte" Grid.Column="0" />
    
    <TextBlock Grid.Column="1" />
</Grid>

Your first thought might be to data bind directly to a property of the xamRichTextEditor.  Well, you would be wrong.  We actually need to use a “middle man” called a document adapter.  Since the xamRichTextEditor supports PlainText, HTML, and RTF formats, you’ll want to choose which format you need.  Heck, you may want to support all of them.  That’s fine, no problem.  Either way, you need to add a reference to the document format you will be using.  I will be using HTML in this post.

xamRichTextEditor document formats

Once you have added the format references you need, we can create a document adapter in XAML. In this case, I’ll be using the HtmlDocumentAdapter.

      
      <ig:HtmlDocumentAdapter x:Name="_html" Document="{Binding ElementName=_rte, Path=Document}" />

As you can see, when I defined the HtmlDocumentAdapter, I data bound the Document property to the xamRichTextEditor.Document property.  This is how you make the connection between the two controls.  Now the next step is to data bind the Text property of the TextBlock in our sample to the HtmlDocumentAdapter so that we can visualize the HTML being generated as we create rich text in the xamRichTextEditor.

      
      <TextBlock Grid.Column="1" Text="{Binding ElementName=_html, Path=Value}" />

That’s it!  We are now data bound.  Run the application and let’s see what we get.

image

Perfect!  The HTML generated by the xamRichTextEditor is data bound and being rendered by the TextBlock control.  If you start typing into the xamRichTextEditor, you will notice that the HTML isn’t updated as you type.  This is because, by default, the source doesn’t update until the control has lost focus.  Now, you may think, “oh, I’ll just use the UpdateSourceTrigger on the binding to have it update on any key stroke”.  Well, once again, you would be wrong!  You actually have to use a property that exists on the document adapter called RefreshTrigger.

image

You will notice four options: Delayed, ContentChanged, Explicit, and LostFocus.  Half of those are self-explanatory.  ContentChanged behaves like a property-changed notification: it updates the Value every time the content in the xamRichTextEditor changes, which for very large documents could cause some performance issues.  When using Delayed, you have two additional properties to help control the behavior of the update: DelayAfterFirstEdit and DelayAfterLastEdit.

DelayAfterFirstEdit is a timespan that allows you to define how long to wait after you first start typing in the xamRichTextEditor to update the binding.

DelayAfterLastEdit is a time span that allows you to define how long to wait after you stop typing in the xamRichTextEditor to update the binding.

You can even use them together if you like.


<ig:HtmlDocumentAdapter x:Name="_html" Document="{Binding ElementName=_rte, Path=Document}"
                        RefreshTrigger="Delayed"
                        DelayAfterFirstEdit="00:00:02:00"
                        DelayAfterLastEdit="00:00:02:00" />

Here is our final XAML to create the data binding between the xamRichTextEditor, and the TextBlock.

<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition/>
        <ColumnDefinition/>
    </Grid.ColumnDefinitions>
    
    <ig:HtmlDocumentAdapter x:Name="_html" Document="{Binding ElementName=_rte, Path=Document}"
                            RefreshTrigger="Delayed"
                            DelayAfterFirstEdit="00:00:02:00"
                            DelayAfterLastEdit="00:00:02:00" />
    
    <ig:XamRichTextEditor x:Name="_rte" Grid.Column="0" />
    
    <TextBlock Grid.Column="1" Text="{Binding ElementName=_html, Path=Value}" />
</Grid>

 

Binding to a Property in a ViewModel

So what if we want to data bind to a property in our ViewModel.  We are using MVVM after all!  Well that would require just a slight modification.  Let’s say I have a ViewModel that looks like this:

using System.ComponentModel;
using System.Runtime.CompilerServices;

public class MainWindowViewModel : INotifyPropertyChanged
{
    private string _htmlText;
    public string HtmlText
    {
        get { return _htmlText; }
        set
        {
            _htmlText = value;
            OnPropertyChanged();
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
    protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

Well, with a small modification to our XAML, we can create a binding between the xamRichTextEditor and the property on the underlying ViewModel.

<ig:HtmlDocumentAdapter x:Name="_html" Document="{Binding ElementName=_rte, Path=Document}"
                        RefreshTrigger="Delayed"
                        DelayAfterFirstEdit="00:00:02:00"
                        DelayAfterLastEdit="00:00:02:00"
                        Value="{Binding HtmlText}"/>

Assuming your View’s DataContext is properly set, like mine is, this will now update your property according to your RefreshTrigger settings.  You can then serialize this property any way you like.

Be sure to check out the source code, and start playing with it.  As always, feel free to contact me on my blog, connect with me on Twitter (@brianlagunas), or leave a comment below for any questions or comments you may have.

Improving Your Craft with Static Analysis


These days, I make part of my living doing what's called "software craftsmanship coaching."  Loosely described, this means that I spend time with teams, helping them develop and sustain ways to write cleaner code.  It involves introduction to things like the SOLID Principles, design patterns, DRY code, pair programming, and, of course, automated testing and test driven development (TDD).  I've spent a lot of time contemplating these subjects and their economic value to organizations, even up to the point of creating a course for Pluralsight.com about this very thing.  And through this contemplation, I've come to realize that TDD is an extraordinarily nuanced practice, both in terms of advantages offered and challenges presented.

This post is not about TDD, so what I'd like to do is zoom in on one particular benefit offered by the practice.  It's a benefit that tends to be overlooked beside the regression suite that it generates and the loosely coupled design that it encourages.  But one of the important things that TDD does is to provide a very tight, automated feedback loop.  Consider what generally happens if you're working on a web application and you want to evaluate the effects of your most recent changes to the code base.  You build the code and then run it, and running it is generally accomplished by deploying it to some local version of a web server and then starting the web server.  Once the web server and your web application are running, you then engage the GUI and navigate to wherever it is that will trigger your code to be run.  Only at this point do you get feedback about what you've done.  TDD short-circuits this process by requiring only build and execution of a test suite.

Of course, TDD isn't the only way to create a tight feedback loop, but it is a well-recognized one.  And it's also one that tends to spoil you.  After becoming used to TDD, it's hard to go back to waiting for long cycle times between writing code and seeing the results.  In fact, it tends to go the other way and you find yourself chasing other means of obtaining fast, automated feedback.  It was this exact dynamic that got me hooked on the idea of static code analysis.  If I could get quick feedback from unit tests about whether my code worked, why couldn't I get feedback about whether it was well written?

A Code Quality Feedback Loop?

Now, "well written" inherently invites a great deal of subjectivity, and it's not as though there is any universal agreement, even in a given language, as to what properties of code are ideal.  But there are some pretty well established trends that get pretty wide agreement.  It is preferable not to write classes and methods that are overly large or complex.  It is preferable not to create modules that are too tightly coupled or needlessly interdependent.  And, speaking of dependencies, it's better not to create cycles.  It's pretty easy to argue that inheritance hierarchies shouldn't be too deep, method parameter rosters shouldn't be too long, and classes shouldn't be too overrun with methods.

But factoring all of these things and more into the mix, it gets hard to keep track of it all.  I mean, it's easy enough to be in the middle of some monster 4,000-line method and think, "man, this method is waaay too big," but it can be harder to notice when you're adding a few lines to a method that's already marginally too long.  After all, it's not necessarily at the forefront of your mind, since you're probably in there chasing some infuriating bug.

Before giving up hope, though, consider things with which you may be more familiar, such as test coverage tools and compiler warnings.  You can deliver code with minimal test coverage or even with boatloads of compiler warnings, but there's a nagging pull not to do so.  Call it gamification or perfectionism or whatever you like, but it's there, even if you don't always obey it.  There's a pressure to fix these issues because they're constantly there, in your face.  They're part of a pretty tight feedback loop for you.

So I encourage you to add static analysis tools into your feedback loop.  I'm not really talking about the kinds of tools that alert you if you're not following the team's coding standards (go nuts with this if you want).  Rather, I'm referring to the kinds of tools that show you things about your code like line count in methods, cyclomatic complexity, number of methods in a class, and class cohesion.  Set up tools that warn you when these things are running afoul of what they generally look like in "clean code."

What you're going to get out of this is not the bullet-proof, "one true way" to do things.  Life isn't that simple, and people who tell you it is are selling you a bill of goods.  What you're going to get out of it is a growing understanding of the architectural tradeoffs buried within the code that you write.  The static analysis tool serves the same purpose as the rumble strips on highways, jolting you whenever you venture beyond what may be considered standard usage.  Sure, there might be reasons to veer onto the shoulder in certain odd circumstances, but usually you've just drifted over there due to inattentiveness.  Well, not anymore you won't.

If you're skeptical, just install such a tool and see what you think.  See what it says about your code, but don't take any action one way or another if you're not comfortable with it.  If you disagree with it, do some research and try to formulate an argument as to why.  I'm not advocating that you revisit all of your programming decisions to achieve a number that some tool says you should have.  I'm advocating that you make yourself aware of these numbers and the concepts that drive them so that you can have intelligent conversations about them and make informed decisions.  And I'm advocating that you do this with a fast feedback loop, safely in the comfort of your own IDE.

The quick feedback here is the best part of all.  The static analysis tools are just executed algorithms.  You're not submitting to peers for a code review or putting your code on the internet and being blasted by mean-spirited trolls.  You're just helping yourself to some automated feedback with the understanding that you can keep helping yourself to it whenever you want.  After enough time with this approach, you'll be prepared for the arguments that actual trolls and critics might offer up.  And, hey, you might just learn some things and change some habits in ways that make you happy.

Developer News - What's IN with the Infragistics Community? (5/11-5/17)

Objects in JavaScript for .NET developers – Part 1


 

Here are some fun facts for you: JavaScript is not a class-based object-oriented language, yet almost everything in JavaScript is an object. JavaScript does not have classes, and we can create an object from another object. A function can be used as a constructor and will return a newly created object. And every object in JavaScript has a second object associated with it, called its prototype.

If you’re coming from a .NET background, the sentences you just read probably don’t make any sense. But these are all true statements about JavaScript. And in this post we will focus on different ways to create objects in JavaScript:

1.       Object as literal

2.       Creating an object using the new operator and constructors

3.       Creating an object using the Object.create() static method

 

Object creation as literal

The simplest way to create an object is with an object literal. We can create a simple object as shown in the listing below:

 

var foo = {};

foo.prop = "noo";

console.log(foo.prop);

var rectangle = { height: 20, width: 30 };

console.log(rectangle.height);

rectangle.height = 30;

console.log(rectangle.height);

 

In the above listing, we have created two objects; note the following:

1.       Object foo does not contain any properties.

2.       Object rectangle contains two properties: height and width.

3.       Properties can be added to an object after creation. When object foo was created it did not have any properties, so we added a property named “prop” to it afterwards.

4.       The value of a property can be modified after the object is created. In the above listing we modified the height property.

We can create a complex object as the object literal as well. Let us say we want to create a student object. This should contain the following properties:

1.       Name

2.       Age

3.       Subject

4.       Parents – another object literal with its own properties like name and age.

The complex student object can be created as shown:

var student = {

    name: "David",

    age: 20,

    subject: "Math",

    parents: {

        name: 'Mark',

        age: 58

    }

};

 

var studentparentage = student.parents.age;

console.log(studentparentage);

 

As you notice in the above listing, the parents property is an object itself, with its own properties. We can add, remove, and access the properties of a nested object in the same way we would on the outer object.
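For instance, building on the student object above (the added values are purely illustrative):

student.parents.name = 'Marcus';     // modify a nested property
student.parents.city = 'Boston';     // add a new nested property
delete student.parents.age;          // remove a nested property
console.log(student.parents.city);   // "Boston"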

A single object literal creates a new object every time it is evaluated. To understand this better, let’s take a look at this code snippet:

var fooarr = [];

for (var i = 0; i < 10; i++) {

 

    var foo = { val: i };

    fooarr.push(foo);

    console.log(fooarr[i].val);

 

}

console.log(fooarr[3].val);

 

You will notice here that we are creating an object literal inside a loop and pushing each created object into an array. The literal is evaluated 10 times, so 10 distinct objects are created, demonstrating that a single object literal can create many new objects if it is inside a loop body or used repeatedly. To verify this, we access the 4th created object outside the loop.

We need to keep this in mind while working with object literals: a single object literal creates as many new objects as the number of times it is evaluated.
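We can also confirm that the loop produced distinct objects rather than ten references to one object; a quick check along these lines:

console.log(fooarr[3] === fooarr[4]);   // false - two separate objects
console.log(fooarr[3].val);             // 3
console.log(fooarr[4].val);             // 4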

 

Creating an object using the new operator or constructor pattern

In JavaScript we can also create an object using the new operator; this is known as the constructor pattern. The new keyword must be followed by a function invocation, and in this case the function works as a constructor. In the object-oriented world, a constructor is a function used to construct an object, so the function invoked after the new keyword constructs the object and returns it.

Keep in mind that JavaScript (up through ECMAScript 5) does not have classes, but it supports special functions called constructors. Simply by calling a function after the new operator, we ask that function to work as a constructor and return the newly created object. Inside the constructor, the object under construction is referred to by the this keyword.

To understand it better, let us consider the following listing,

function Rectangle(height, width) {

    this.height = height;

    this.width = width;

    this.area = function () {

        return this.height * this.width;

    };

}

 

var rec1 = new Rectangle(45, 6);

var rec2 = new Rectangle(8, 7);

var rec1area = rec1.area();

console.log(rec1area);

var rec2area = rec2.area();

console.log(rec2area);

 

In the above listing:

1.       We created a Rectangle function.

2.       We created objects using the new keyword.

3.       The Rectangle function was invoked after the new keyword, hence it worked as a constructor.

4.       The Rectangle constructor returned the created object.

5.       Inside the constructor, the object is referred to with the this keyword.

If we call the Rectangle function without the new operator, it works as a normal JavaScript function, whereas if we call it after the new operator, it works as a constructor and returns the created object.
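To see the difference, here is a hedged sketch (non-strict mode assumed):

var r1 = new Rectangle(4, 5);   // "this" is the new object; r1 gets height and width
console.log(r1.height);         // 4

var r2 = Rectangle(4, 5);       // plain call: the function returns nothing...
console.log(r2);                // undefined
console.log(height);            // 4 - in non-strict mode "this" was the global
                                // object, so the properties leaked onto it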

Everything is good about the above code, with one problem: the area function is redefined for every object. We certainly don’t want this; the area function should be shared among the objects.

 

Object Prototypes

Every function in JavaScript has a prototype object. When we use a function as a constructor to create objects, the properties of its prototype object become available to the newly created objects. We can solve the above problem of the area function being redefined by using the prototype object of the constructor.

 

function Rectangle(height, width) {

    this.height = height;

    this.width = width;

}

 

Rectangle.prototype.area = function () {

    return this.height * this.width;

};

var rec1 = new Rectangle(45, 6);

var rec2 = new Rectangle(8, 7);

var rec1area = rec1.area();

console.log(rec1area);

var rec2area = rec2.area();

console.log(rec2area);

 

In the above listing we create the area function as a property of Rectangle.prototype, so it is available to all new objects without being redefined.

Keep in mind that every JavaScript object has a second object associated with it, called its prototype, and the object inherits the properties of its prototype object.
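We can verify both claims with a couple of quick checks (ES5 assumed for Object.getPrototypeOf):

console.log(rec1.area === rec2.area);   // true - one shared function, not one copy per object
console.log(Object.getPrototypeOf(rec1) === Rectangle.prototype);   // true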

 

Object creation using Object.create()

The Object.create() static method was introduced in ECMAScript 5. It is used to construct a new object with a given prototype. Using Object.create(), a new object can be created as shown in the listing below:

 

var foo = Object.create(Object.prototype,

       { name: { value: 'koo' } });

console.log(foo.name);

 

Some important points about Object.create() to remember:

1.       The method takes two arguments: the first is the prototype of the object to be created, and it is required.

2.       The second argument is optional, and describes the own properties of the newly created object.

3.       The first argument can be null, but in that case the new object will not inherit any properties.

4.       To create an empty object equivalent to {}, pass Object.prototype as the first argument (see the snippet below).
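A minimal sketch of points 3 and 4:

var bare = Object.create(null);               // inherits nothing at all
console.log('toString' in bare);              // false - not even Object.prototype members

var empty = Object.create(Object.prototype);  // equivalent to {}
console.log('toString' in empty);             // true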

Let’s say you have an existing object called foo and you want to use foo as the prototype for a new object called koo, with an added property named “subject”. You can do so like this:

 

var foo = {

 

    name: 'steve',

    age: 30

};

 

 

var koo = Object.create(foo,

       { subject: { value: 'koo' } });

 

console.log(koo.name);

console.log(koo.subject);

 

In the above listing, we have an object named foo, and we’re using foo as the prototype of the object named koo. koo inherits the properties of foo and also has its own additional property.

 

Conclusion

There are a few different ways to create objects in JavaScript, and in this post we focused on three of them. Stay tuned for the second part of this post where we will focus on:

·         Inheritance

·         Object Properties

·         Property getters and setters

·         Enumerating properties, etc.

I hope you find my posts useful - thanks for reading, and happy coding!

 

Simplifying the JavaScript Callback function for .NET developers


In JavaScript, functions are objects, and (as the snippet after this list shows) they can:

·         Be passed as an argument to another function

·         Return as a value from a function

·         Be assigned to a variable
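Here's a quick sketch illustrating all three points (the names are illustrative):

var square = function (n) { return n * n; };   // assigned to a variable

function twice(fn, x) { return fn(fn(x)); }    // accepts a function as an argument
console.log(twice(square, 3));                 // 81

function makeAdder(a) {                        // returns a function as its value
    return function (b) { return a + b; };
}
console.log(makeAdder(2)(5));                  // 7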

Let’s assume that you have a JavaScript function (let’s call it function A) with the following properties:

1.       Function A takes another function (let’s call this one function CB) as one of the parameters.

2.       Function A executes the function CB in its body.

In the above scenario, function CB is known as the Callback function. Let’s learn more about it using the following code:

 

function A(param1, param2, CB) {

 

    var result = param1 + param2;

    console.log(result);

    CB(result);

 

}

 

Here we’ve created a function A, which takes three parameters. You will notice that the last parameter - CB - is a function, which is called inside the body of function A. Next we’ll call function A as shown in the listing below:

 

function CallBackFunction(result) {

    console.log(result + ' in the CallBack function');

}

 

A(5, 7, CallBackFunction);

 

Here we’ve created a function named CallBackFunction (You can name it whatever you’d like) and we’ve passed it as the third parameter in function A. In its body, function A is executing the passed CallBackFunction.

Another way to pass a callback is as an anonymous function. See the example here:

 

A(5, 7, function (result) {

 

    console.log(result + ' in the CallBack function');

});

 

How does the Callback function work?

We pass the definition of the callback function to the called function. Let’s revisit the example above:

1.       In function A, we are passing the definition of the callback function

2.       Function A has information about the callback function definition

3.       Function A calls the callback function in its body

4.       While calling function A, we pass the callback function

5.       The callback function can be either named or an anonymous function

 

Optional callback function

What would happen if we don’t pass a third parameter (i.e. a callback function) to function A? In that case, an exception will be thrown stating that “undefined” is not a function. A JavaScript function may be called with more or fewer arguments than it declares; when we call a JavaScript function with fewer arguments, “undefined” gets passed for the parameters that are missing. So in the above scenario for function A, when we don’t pass the third argument, undefined gets passed, and we get the exception that undefined is not a function.
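A short illustration of that behavior (the show helper is hypothetical):

function show(a, b) {
    console.log(a, b);
}
show(1);    // logs "1 undefined" - b was never passed

A(5, 7);    // with the original, unguarded A above: logs 12, then throws
            // "undefined is not a function", because CB arrived as undefined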

We need to take care of the following three points when accepting a callback function:

1.       Make sure the callback function is passed.

2.       If the callback function is not passed, handle that case gracefully.

3.       Ensure that what is passed is actually a function, not a literal or some other kind of object.

We can implement the points above in the snippet below:

 

function A(param1, param2, CB) {

 

    var result = param1 + param2;

    console.log(result);

    if (CB !== undefined && typeof (CB) === "function") {

        CB(result);

    }

 

}

 

In the above listing, we are checking:

1.       Whether the value of CB is undefined or not

2.       Whether the type of CB is a function or not

By checking the two points above, we can make the callback function optional.
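With the guard in place, both styles of call are now safe; for example:

A(5, 7);                                  // no callback: logs 12 and returns quietly
A(5, 7, function (result) {               // with callback: logs 12, then the message
    console.log(result + ' in the CallBack function');
});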

 

Callback with asynchronous call

In JavaScript, sometimes you might be required to work with asynchronous methods, for example when:

1.       Reading or writing a file system

2.       Calling web services

3.       Making an AJAX call, etc

The tasks mentioned above can take time and block execution. While reading from the file system or making an AJAX call, you don’t want to sit and wait; you’ll want to perform these operations asynchronously. We can use a callback function to handle the asynchronous operation, so that the callback is executed when the asynchronous call completes.

Let’s say that you need to consume a service to fetch data. Without AJAX, that can be done as shown in the listing below:

 

getData('serviceurl', writeData);

 

function getData(serviceurl, callback) {

    // service call to fetch data (stubbed here with a hard-coded array)

    var dataArray = [123, 456, 789, 12, 345, 678];

    callback(dataArray);

}

 

function writeData(myData) {

    console.log(myData);

}

 

In an AJAX call, we can use a callback function as shown in the listing below:

 

function GetUser(serviceurl, callback) {

    var request = new XMLHttpRequest();

    request.onreadystatechange = function () // can replace this with callback

    {

        if (request.readyState === 4 && request.status === 200) {

            callback(request.responseText); // using the callback

        }

    };

    request.open('GET', serviceurl);

    // request.setRequestHeader('X-Requested-With', '*');

    request.send(null);

}

 

function DisplayData(data) {

    console.log(data);

}

 

GetUser('serviceurl', DisplayData);

 

As you see here, we’re using the callback function to print the data. We can even replace the inline function assigned to onreadystatechange with a reusable callback, as sketched below.
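One way to do that, as a hedged sketch (the onComplete helper name is made up):

function onComplete(request, callback) {
    return function () {
        if (request.readyState === 4 && request.status === 200) {
            callback(request.responseText);
        }
    };
}

// Inside GetUser, the inline handler then becomes:
// request.onreadystatechange = onComplete(request, callback);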

That’s about all I’ve got for this post about the callback function – I hope you find it useful, thanks for reading!
