Find out why users are migrating from Outlook for Mac to Outlook 2010 for Windows, and learn about a solution that will help you import a Mac OLM file into Outlook 2010.

Microsoft introduced Outlook for Mac after the largely unsuccessful Entourage email client. The application became popular as Outlook 2011 for Mac and provided most of the features that Windows users of Outlook enjoyed. Yet many Mac users want to switch to Outlook 2010 for Windows. What drives so many users to make this decision? Let us discuss the causes behind it and the possible solutions that will help them import Mac OLM files into Outlook 2010 conveniently.

Reasons Users Migrate to Outlook for Windows

  • Better search options were not available in Outlook 2011
  • No folder categorization
  • No dedicated filter operation for searching email messages
  • Prone to viruses, and Mac messages often get corrupted
  • Far more add-ons are available for Outlook for Windows than for Outlook for Mac

It is clear that Outlook for Mac failed to provide user-friendly features, and this is the main reason users opt to move to the Windows environment. So let us now discuss the solutions that help users import OLM files into the PST file format.

Method-1

One way of importing OLM files from Outlook for Mac into the PST file format of Outlook for Windows is by configuring an IMAP account. The steps for this procedure are given below.

  • Configure an IMAP email account, for example with Gmail
  • Add the newly created Gmail IMAP account to Outlook for Mac
  • Synchronize the IMAP account with Outlook for Mac
  • Move all the OLM file data into the Gmail IMAP folder
  • Move the data from the Gmail IMAP folder into Outlook for Windows
  • The data in Outlook will eventually be stored in PST format

Disadvantages

This method takes a lot of time to execute, and if there is a large number of emails it is not feasible for users.

Method-2

If a user’s account has been configured both in Outlook 2011 for Mac and in Outlook on the Windows operating system, this method can be used to import the OLM files from Outlook for Mac into Outlook for Windows. The detailed steps of the procedure are given below.

  • First connect the Exchange Server mailbox to Outlook on the Windows platform with Cached Exchange Mode enabled
  • Then use the PowerShell cmdlet Export-Mailbox to export the data from the Exchange Server mailbox to an Outlook PST file

Disadvantages

This approach requires an Exchange Server setup, which not all users have, and installing Exchange Server just to convert OLM files to PST format is not practical.

Move to a More Efficient Solution

In such cases, users can move to a more efficient approach: migrating the OLM files from Outlook for Mac to Outlook 2010 with the help of an OLM to PST converter utility. It competently migrates the OLM file data to Outlook without making any changes to the original format of the messages.

http://dns.js.org
http://js.org
http://github.com
https://pages.github.com/

iisexpress-proxy-by-icflorescu.jpg

TL;DR

npm i -g iisexpress-proxy
iisexpress-proxy 51996 to 3000

The story

Are you a .NET developer building mobile web applications? Have you ever been frustrated by the fact that there's no easy way to enable IIS Express to accept connections from remote devices?...

Well, join the club. If you're patient enough to dig through the various links that come up when you search for this, you'll see that it is possible, but not really straightforward.

However, there's now a much simpler solution available: you can proxy the http traffic to IIS Express using this little Node.js command-line tool I've put together. It's as simple as typing this in the command prompt:

iisexpress-proxy 51996 to 3000

It will show you something like:

IIS Express Proxy 0.1.2
Proxying localhost:51996 to:
- Wi-Fi: 192.168.0.102:3000
- VMware Network Adapter VMnet1: 192.168.192.1:3000
- VMware Network Adapter VMnet8: 192.168.245.1:3000
Listening... [ press Control-C to exit ]

Then you can simply point your tablet or mobile phone at http://192.168.0.102:3000.
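By the way, the underlying idea is straightforward: a tiny reverse proxy that listens on all network interfaces and relays each request to IIS Express on localhost. Here's a minimal Node.js sketch of the concept (not the actual iisexpress-proxy source; the ports are just the ones from the example above):

// minimal sketch: accept requests on 0.0.0.0:3000 and relay them to IIS Express on localhost:51996
var http = require('http');

http.createServer(function (req, res) {
  req.headers.host = 'localhost:51996'; // IIS Express only accepts requests addressed to localhost
  var upstream = http.request({
    host: 'localhost',
    port: 51996,
    path: req.url,
    method: req.method,
    headers: req.headers
  }, function (upstreamRes) {
    res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
    upstreamRes.pipe(res); // relay the response body back to the remote device
  });
  req.pipe(upstream);      // relay the request body to IIS Express
}).listen(3000);           // port 3000 is reachable from other devices on the network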

Motivation

8 years ago I was doing lots of C#/.NET development. Then I switched to Ruby/RoR, then to Node.js, which I've been using almost exclusively for 3 years. During these last 3 years I've become accustomed to a rich, flourishing, cutting-edge development workflow that just works.

Now I am contemplating the opportunity of working on a large Angular.js/.NET application. Neither of those is my technology of choice, but they do seem to be favored by the corporate environment, and after all, that's where the money is.

While Microsoft is slowly reinventing itself, I've come to realize their ecosystem and philosophy are still far from web-developer friendly. For instance, Visual Studio might be a great IDE for building desktop applications, but to be honest I find it rather counter-productive for web development.

However, you can alleviate the pain by borrowing the right tools from the open-source world and especially from the rich Node.js ecosystem.

Sharing is caring

If you like iisexpress-proxy, please don't hesitate to tweet about it!

https://www.youtube.com/watch?v=TjsXqt-UxLo&feature=youtu.be

http://blog.hostilefork.com/where-printf-rubber-meets-road/

A dissertation is a formal document that is essential for earning a degree associated with research work.

dissertations.jpg

While preparing a dissertation, students face various issues, including:

  • Suitable thesis statement – Students do not give much importance to the thesis statement, which is a crucial part of the research. It states the intention of your dissertation, yet the statements used are often very basic and non-debatable.
  • Suitable literature – The literature review is another major portion of a dissertation and requires proper research and compilation. Students have a hard time preparing this portion due to a lack of literary sources.
  • Proper language – A student or researcher may have suitable research findings, but converting those findings into a proper dissertation becomes a cumbersome task because of language-related errors.
  • Management of time – While preparing a dissertation, students find it difficult to manage their time, which makes it hard to complete the dissertation by the submission deadline.
  • Proper data – Finding relevant and accurate data to support the dissertation is another issue that affects students during dissertation preparation.

These issues lower the quality of dissertation writing and may also lead to psychological problems such as stress, insomnia and lack of social connectivity.

So, students can take the support of organizations that provide help with dissertation preparation. EssayCorp is one such place that eases students' stress and gives them an affordable dissertation-writing service.

Why choose EssayCorp:

  • Proper formal language – The writers are dedicated to delivering a final dissertation with no grammatical errors, written in formal, readable language.
  • Qualified writers – The writers working at the organization have suitable experience and relevant qualifications, up to PhD level, so they are dedicated to delivering a final dissertation of top-notch quality.
  • Content based on analytical thinking – The writers who prepare the dissertation apply critical thinking.
  • Originality – All the content used in preparing dissertations is completely original, assuring students that no plagiarism is involved in the process.
  • Regular interaction with students – To ensure that the dissertation is written according to the students' requirements, the writers stay in constant touch with them.

A dissertation is one of the most critical documents in a student's or researcher's life. Therefore, students should make sure that their hard work is reflected in the final formal dissertation.

To know more about us, visit here: http://www.essaycorp.com/

Are you guys ever planning to do anything with Sitepoint and/or AirPair? I mean, you guys still have links for Code School and Pluralsight individually even though they've merged, so I was rather curious.

rename 's/\d+/sprintf("%03d",$&)/e' *.jpg

daniellmb shared a code snippet

Creating Custom Fiddler Rules

How can I replace HTTPS traffic from one domain with HTTP from another?

Let's say you have a problem on your secure server in production, and you want to debug it, but you can't set up the SSL certificate locally. One solution is to use a custom Fiddler rule to redirect the HTTPS traffic from production to your local development machine over HTTP. The following is a step-by-step walkthrough on how to do just that.

  1. Open Fiddler
  2. In the top menu select Rules > Custom rules

At this point you may be asked: "Would you like to download and install the FiddlerScript Editor?" If this happens:

  1. Choose Yes
  2. Close Fiddler
  3. Install the Editor plugin
  4. Follow the steps in the first section again

Now you're ready to add custom rules.

  1. From the top menu choose GO > to OnBeforeRequest
  2. Add the following code to the beginning of the function
  3. Save the file and you're done!

// Handle CONNECT Tunnels
if (oSession.HTTPMethodIs("CONNECT"))
{
    oSession["x-replywithtunnel"] = "FakeTunnel";
    return;
}

// Handle HTTPS requests
var PRODUCTION = "secure.mycompany.com";
var DEVELOPMENT = "dev01.mycompany.com";
if (oSession.isHTTPS &&
   (oSession.HostnameIs(PRODUCTION) || oSession.HostnameIs(DEVELOPMENT))) {
    oSession.fullUrl = "http://" + DEVELOPMENT + oSession.PathAndQuery;
    FiddlerApplication.Log.LogFormat("HTTPS -> HTTP {0}", oSession.fullUrl);
}
Don Smith made a comment

Relocated

For a while I was maintaining a blog about code-related topics here on Coderbits. I've since consolidated all my blogging, which isn't much, into a single blog and it's being hosted over at blog.locksmithdon.net.

http://www.popgeo.net/?p=69

I've been on a mission to learn R lately. And while I've been thoroughly enjoying the ride, there's nothing like a tangible objective to really turn the learning up to 11, right? Well, I just happen to have some data in SQL Azure that's begging to be mined. This post describes what I did to connect to it from RStudio.

For posterity, I'm using version 3.1.3 of R inside RStudio v0.98.1103 on OS X Yosemite (10.10.2). Oh, and I'm a big fan of homebrew, so I'm using it to install some of the important bits.

No DSNs

It's probably worth noting that I'm not using a DSN to connect. I'm just using a normal connection string that lives in the code. This meets my needs better since I'm not deploying to another machine. I also prefer to keep the configuration all together rather than having files scattered in obscure (or even obvious) locations on the filesystem.

Overview

At a high level, this process involves the following steps:

  • Install unixODBC using homebrew
  • Install FreeTDS using homebrew
  • Install RODBC from source using R
  • Connect to the database

I'm assuming your SQL Azure database is already running and configured. You will need to know the password of the user account that's in the ODBC connection strings provided by Azure. I don't know how to get the password from the Azure interface, or if it's even possible.

Installing unixODBC

This updates homebrew and its formulae, installs wget (you can remove it if you already have it installed), and then installs unixODBC.

brew update && brew install wget unixodbc

Installing FreeTDS

This installs FreeTDS and wires it up to unixODBC.

brew install freetds --with-unixodbc

Installing RODBC

This installs RODBC v1.3-11. Head over here to be sure you're installing the latest version (or at least the latest 1.x) and update the 2 locations in this command as necessary.

wget "http://cran.r-project.org/src/contrib/RODBC_1.3-11.tar.gz" && R CMD INSTALL RODBC_1.3-11.tar.gz

Connecting to the database

Because I'm not using a DSN, there isn't anything to configure. You just need to run a couple of R commands to get the data. First you'll need to grab the ODBC connection string from the Azure portal.

Get the connection string

  • If you're using the old portal, sign in, make sure you're on the right subscription (subscription link at the top of the page), select SQL Databases on the left, select your database name in the list, select Dashboard, and then select the link in the menu on the right side of the page labeled Show connection strings. Copy the ODBC one.

  • If you're using the new portal, sign in, make sure you're on the right subscription (top/right link from the home screen), select Browse in the left menu, select SQL databases, select the name of your database, select Properties, and then select the link labeled Show database connection strings. Select the icon to copy the ODBC connection string to the clipboard.

Getting results

Here are the commands you need to connect and retrieve data.

Create a variable that contains the connection string (replace the driver with "FreeTDS" and add your password):

connection.string <- "Driver={FreeTDS};Server=[your_host].database.windows.net,1433;Database=[your_db];Uid=[your_username]@[your_host];Pwd=[your_password];Encrypt=yes;Connection Timeout=30;" 

Load the RODBC package and create a database connection:

library(RODBC)
db <- odbcDriverConnect(connection.string)

Get some data (replace your_table):

data <- sqlQuery(db, "SELECT * FROM [your_table]")

Be sure to check out the RODBC vignette (pdf) and the package docs (pdf) for more information about the R commands available.

raphaelom made a comment

About Lucene Merge

Lucene stores its index in segments, and adding a new document is a matter of appending; in the case of an update, a new version is appended and the old ones are marked as deleted. This generates a lot of stale, irrelevant data that only hinders information retrieval. To solve this, Lucene cleans itself up using MergePolicies, which are strategies for consolidating the index.

This is a video of Lucene indexing the Wikipedia dump, showing how merging occurs once the threshold is reached, effectively reducing the index size as new documents are added.

I've just recently published a node.js module that wraps the coderbits profile API. It's really simple to use: just install it:

npm install coderbits

and get the data:

var coderbits = require('coderbits');

coderbits('bit', function (error, profile) {
  if (!error) console.log(profile);
});

So if you're planning to do something with node.js and the data provided by coderbits, don't hesitate to use it and leave your opinions, suggestions and any bugs you find. Pull requests are welcome.

More info on the module page

How big a program (in lines of code, usefulness, whatever) can you write without documentation or an IDE?

How much can you write without needing to compile, run, debug, REPL?

A long time ago I was a bit of a proponent of not bothering to memorize functions, argument orderings, obscure syntax etc... of programming languages. I would find myself saying things like "I don't want to clutter my brain with stuff I can just search for in 30 seconds."

At some point I found myself coding away from a network or much documentation. This forced me to memorize a bunch of these corner cases and not-so-corner cases of whatever language/framework I was programming in at the time. I found that it really did pay off. And the reason it paid off was not so much because I saved the 30 seconds when WRITING code, but because of the huge amount of time I saved READING code. When I say "read" I mean more than just getting the gist of the code, but actually reading it well enough to debug it.

Which brings us to the biggest takeaway: you read much more code than you write. In fact you are constantly reading code, some that you just wrote, some that you wrote weeks, months, or years ago, and a lot that you never wrote. It's a hassle, a waste of real time, and an additional level of frustration not to have a good mental map of the language you are using.

Of course frustration can also come from reading code that's hard to dissect; beautiful code is code that makes the language look like it was made for the problem you are trying to solve. Read Clean Code.

I am posting my first shot at it, but I will be honest: this failed a couple of times before it ran (not having quotes around the argument, and not making $argv global inside of functions). I also didn't use in_array because then I would have to look at the docs to see the order of the needle and haystack.

Why not give it a try? How about leaving a comment with your code, or a link to your favorite codepen?

<?php

/**
 * return true if array $aList is a subset of array $bList 
 */
function isSubset($aList, $bList) {
  foreach ($aList as $a) {
    $found = false;
    foreach ($bList as $b) {  // needle and haystack of in_array?
      if ($a == $b) {
        $found = true;
        break;
      }
    }
    if (!$found) {
      return false;
    }
  }
  return true;
}

function usage() {
  global $argv;
  echo "Usage: $argv[1] '1,2,3|1,2,4'\n";
  echo "see if the list to left of | is a subset of the list to the right\n";
}

function getListsFromArgs () {
  global $argv;
  if (count($argv) != 2) {
    die(usage());
  }
  $cmd = $argv[1];
  $bits = explode("|", $cmd);

  if (count($bits) != 2) {
    die(usage());
  }

  $aList = explode(",", $bits[0]);
  $bList = explode(",", $bits[1]);

  return array($aList, $bList);
}

list($aList, $bList) = getListsFromArgs();
if (isSubset($aList, $bList)) {
  echo "Its a subset!\n";
} else {
  echo "Not a subset\n";
}
raphaelom made a comment

Rlang on Ubuntu

And so the journey begins:

sudo apt-get install r-base r-base-dev 

http://nordicapis.com/using-templates-for-documentation-driven-api-design/

The problem

Suppose you have a field in your Elasticsearch database that is in fact an ID. In such instances, text matching is useless; it's always all or nothing.

The Solution

In Solr, you simply use solr.KeywordTokenizerFactory when mapping your field, but in Elasticsearch you need to define a mapping to prevent the default analysis. According to the docs, simply use:

{
    "mappings" : {
        "products" : {
            "properties" : {
                "productID" : {
                    "type" : "string",
                    "index" : "not_analyzed"
                }
            }
        }
    }
}
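With a mapping like this in place, the ID can be matched exactly, all or nothing, for instance with a term query (the field name follows the example above; the value is made up):

{
    "query" : {
        "term" : {
            "productID" : "ABC-1234"
        }
    }
}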

http://www.rationaljava.com/2015/02/java-8-pitfall-beware-of-fileslines.html

Between typing www.wikipedia.org in the browser and reading about the unexpected Spanish Inquisition, there's a lot of magic going on.

DNS lookup

First the computer has to make sense of the URL, that is, translate it into a meaningful IP address it can connect to and interact with. The OS queries the nameserver for the correct record and caches it locally so future queries won't need to hit the remote server.
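Expressed in Node.js, purely as an illustration of that resolution step:

var dns = require('dns');

// ask the OS resolver / nameserver for the record behind the hostname
dns.lookup('www.wikipedia.org', function (err, address) {
  if (!err) console.log(address); // prints the resolved IP address
});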

Establishing a connection

With the IP at hand, the OS can establish a socket connection between the client machine and the remote server. The stream socket is the base for HTTP communication, the application-level protocol used by websites, and it is identified by the remote server IP, the service port (in this case the default 80), the local port opened, and the local network interface IP. These ports are bound together, and data written to one end is relayed to the other.
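Sketched with Node's net module, again just to illustrate the stream socket (it writes a minimal version of the request shown in the next section):

var net = require('net');

// open a TCP stream socket to port 80 of the remote server
var socket = net.connect({ host: 'www.wikipedia.org', port: 80 }, function () {
  // once connected, the HTTP request is written to the socket
  socket.write('GET / HTTP/1.1\r\nHost: www.wikipedia.org\r\nConnection: close\r\n\r\n');
});

// whatever the server writes back (the HTTP response) arrives as data events
socket.on('data', function (chunk) { process.stdout.write(chunk); });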

HTTP kicks in

Request

With the two-way connection in place, the browser writes a message to the server:

GET / HTTP/1.1
Host: www.freebsd.org
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.7) Gecko/20050414 Firefox/1.0.3
Accept: text/html
Accept-Language: en-us,en;q=0.5

The first line tells the server the client is speaking HTTP 1.1 and is using the GET verb on the root directory. The client also sends several pieces of information using HTTP headers, metadata exchanged by the two peers about the content. For instance, the Accept header tells the server the client knows how to understand HTML, and the Accept-Language header says the content should preferably be in English.

Response

The server will then reply with the index.html document sitting at the root of the server by writing an HTTP response to the socket. Like the client, the server responds with several pieces of metadata about the content it is streaming. One important point is the status code 200; HTTP has several status codes covering server availability, errors, authentication, and even whether the content returned is up to date or a stale copy.

HTTP/1.1 200 OK
Date: Fri, 13 May 2005 05:51:12 GMT
Server: Apache/1.3.x LaHonda (Unix)
Last-Modified: Fri, 13 May 2005 05:25:02 GMT
Content-Length: 33414
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/html

<!DOCTYPE html>
<html>
<img src="/spanish-inquisition.jpg"/>
</html>

Rendering

The browser reads the HTML payload and displays it using its rendering engine.

no_one_expects_the_spanish_inquisition_by_simzer-d5bxjqp.png

https://www.youtube.com/watch?v=SeLOt_BRAqc

https://leanpub.com/D3-Tips-and-Tricks

polymer.jpg

The hype

If you're working in web / mobile / UI / UX development and haven't been living under a rock for the last couple of years, you've probably heard about Web Components and Polymer.

The marketing phrase says "Web Components usher in a new era of web development based on encapsulated and interoperable custom elements that extend HTML itself"; lots of people are genuinely excited about a concept that promises to profoundly challenge and change the way we're writing web and mobile applications.

jQuery and a whole plethora of web development frameworks built upon it suddenly seem to fall out of fashion and become obsolete now. Even the future of Angular and React is questioned by some, as Polymer core elements overlap part of the functionality already implemented by those frameworks. On the other hand I've read numerous articles and seen quite a few presentations showing you how you can integrate Polymer in your existing / ongoing Backbone/Marionette / React / Whatever.js project.

Reality check

Well, leaving the marketing story aside, let's try to look beyond the hype. I won't go into technical details (usually that's where the devil is), but Web Component specs have certainly been around for a while, and although everybody agrees we're in desperate need of a better way to deal with the issues they're addressing, the major browsers haven't been exactly quick in providing the native support. Right now, Chrome seems to be the only one offering a full implementation, the guys at Mozilla are still debating whether they should support HTML imports or not, while the guys at Microsoft are, as always, completely out of the loop (no, apparently they won't do it in Spartan either).

Some people said "no problem, we can polyfill the other browsers" (in plain English, polyfill means to compensate for the missing features by providing non-native implementations written in JavaScript). And thus, X-Tag, Bosonic and Polymer were born. Among them, Polymer seems to be the game-changer to keep an eye on, for a number of reasons, most notably because it looks fantastic and works really well (in Chrome), especially since the addition of material-design Paper Elements.

So, what's the problem if the other contemporary browsers are polyfilled?

We can use Polymer today, right?

Well, maybe... but make sure to check & double-check everything. For one thing, polyfills are great, but performance is one of the issues you should pay special attention to. Don't rely on the hype, decide for yourself. Take your time to study these samples across multiple browsers, OSes and devices and make sure to keep an eye on the developer console to see what happens behind the scenes.

We can integrate Polymer into our Backbone+Marionette / Angular / React based applications, can't we?

Technically speaking, yes, you can. But it doesn't mean you should. Sometimes starting from a tabula rasa is just better. Let me give you just one reason (which should be common sense) that developers excited about a new technology often choose to ignore:

  • jQuery: 82KB (or Zepto: 24KB)
  • Underscore.js: 15KB
  • Backbone.js: 20KB
  • Marionette.js: 39KB
  • WebComponents.js: 103KB

And most likely your application will actually require a lot more, since you're probably using a touch-friendly slider, Google Maps, etc... and maybe Twitter Bootstrap. And some of them also come with lots of CSS code (fyi, CSS is executed too, and sometimes the execution can be costly). As a side note, an important part of that code inevitably provides duplicate / obsolete functionality.

All that adds up to hundreds of KB of highly compressed JavaScript/CSS code that the browser has to download, understand and execute, without even considering the actual application payload. And that's an optimistic case because Backbone.js + Marionette.js is a lean framework for the functionality it provides.

All this may not seem like much nowadays, but not everybody has an unlimited 4G data plan and the latest flagship smartphone. Developers and tech-savvy people usually do, most normal people don't :-). Which means they'll only get the polyfilled, less-than-ideal UX experience.

I've seen a lot of promising web / mobile projects end up awfully wrong because of UX code clutter. Sometimes developers without real web experience just "throw up a ThemeForest template or skin on top" of an enterprise application; sometimes they've become so accustomed to working on the latest iMac or MacBook Pro that they simply forget there are people out there using Windows laptops or cheap Android phones.

So, the bottom line is this: if you're brave enough to embrace Polymer today, maybe you should consider not mixing it up with "legacy" jQuery-based codebase. They're not always complementary and the mix will most certainly introduce a cost, aside from the benefits.

I'm writing code in CoffeeScript/TypeScript, LESS/Stylus and Jade/HAML templates, and I pack everything with Browserify. Can I "plug" Polymer into my workflow?

Well, good for you. You're probably a proponent of terseness and simplicity, like me :-) The bad news is you can't easily integrate Polymer in your workflow - and again, maybe you shouldn't. Among other things, CoffeeScript (which I use constantly and love, btw) appeared to compensate for some of the shortcomings of pre-ES6/7 JavaScript, and some of those are now polyfilled by webcomponents.js; Polymer was made with Bower in mind and comes with a specific packaging tool called vulcanize (a decision sometimes criticized by JS community members). If you're building a Polymer-based project, there's no real reason to add browserify to the mix, except to show that it's possible.

I'm addicted to LiveReload; since I've discovered it, I simply can't work without it. It works with WebComponents / Polymer, right?

For people who haven't (yet?) heard about it, LiveReload is a tool / set of tools that brings an enormous boost in developer productivity by automatically reloading web assets as they change (images, fonts, stylesheets, scripts), without reloading the entire page. While this at first sight may seem like a trifle, it's actually invaluable: consider you're working on a context-dependent dynamic application, and during the development process you need to bring some visual adjustments by modifying some stylesheets. Without LiveReload, you'd have to hit "refresh", which is no big deal... but what if it takes you a few minutes to reach the same point in the application workflow? Plus, if your server-side is .NET, restarting the debug process in Visual Studio takes forever.

The bad news is that LiveReload doesn't play nicely with Polymer. I've been quite disappointed to discover this, but it doesn't. Updating an element will trigger a full page reload, while modifying a linked stylesheet won't trigger a reload at all. Which kind of defeats the purpose of LiveReload.

"But I've seen a how-to on setting up a Polymer application and they did mention LiveReload", you might say. Yes, people have demoed front-end tooling scenarios for Polymer, but apparently the purpose was mostly "academic" and they did't dwell much on the subject of how LiveReload actually works...

Don't take my word for it, go ahead, try it for yourself.

So, should I use WebComponents & Polymer today?

I'm not saying you shouldn't. On the contrary. We need to move things forward; we need a better web, we desperately need better tools to build it and Polymer definitely has the potential to be a better tool. But don't let your excitement, the marketing hype or the tech buzz-of-the-day cloud your judgement. Make an informed decision and don't expect a silver bullet.

Personally, after two weeks of studying and playing around, I still have mixed feelings about it. I'm not sure I'd use it for a public website with a wide, non tech-savvy audience. But it does look like a safe bet for building a PhoneGap-packaged application for Android devices...

Final thoughts

Everything I wrote above is just a personal opinion regarding the practical state of things today, Feb 17th 2015, as I am evaluating Polymer as an alternative for a personal project. But technology is changing and evolving constantly, so make sure to draw your own conclusions.


This article was initially published on LinkedIn here.

What is an Internet Header?

An email consists of three main parts: the envelope, the message body, and the headers. The envelope is the part we can't actually see; it is an internal mechanism responsible for routing the mail. The message body is the content the email delivers; it is the part we can see, edit, copy, forward, or do anything else with. Then comes our main topic, the Internet header, which is the most interesting part of an email but also the most difficult to understand. The Internet header is basically used to identify the routing of a message along with other details needed to identify the sender, receiver, timing, and subject of an email. Internet headers play an essential role in identifying theft, so email client users pay special attention to this section when changing platforms. For example, for Lotus Notes to Outlook email conversion they prefer a tool that can preserve the Internet headers so they can be used later. The following explanation will help you better understand Internet headers.

Understanding the fields of an Internet header

From: identifies the sender of the mail and shows the sender's email address. It is the least reliable field.

To: shows the details of the addressee.

Subject: contains brief information about the purpose of the email.

Date: stores the date and time the message was sent, for example "Date: Tue, 13 Jan 2015 16:26:00 +0530".

Return-Path: similar to Reply-To; it contains the address to which replies and returned mail are sent.

MIME-Version: MIME stands for Multipurpose Internet Mail Extensions, the standard used to share images, video and other multimedia content over the Internet. It is shown like "MIME-Version: 1.0".

Message-ID: a string assigned by the mail system when the message is created, for example "Message-ID: <83F79F48-927D-4168-AE17-93FBBB3E846C@email1.serverid>".

Content-Type: defines the format of the message text, for example plain text or rich text. It is shown like "Content-Type: text/plain; charset="utf-8"".

X-Spam-Status: contains the spam score, usually added by your email client. A related example is "X-Antivirus-Status: Clean".
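Putting these together, a hypothetical header block (with made-up addresses and values) might look like this:

From: sender@example.com
To: receiver@example.com
Subject: Quarterly report
Date: Tue, 13 Jan 2015 16:26:00 +0530
Return-Path: <sender@example.com>
MIME-Version: 1.0
Message-ID: <83F79F48-927D-4168-AE17-93FBBB3E846C@email1.serverid>
Content-Type: text/plain; charset="utf-8"
X-Antivirus-Status: Clean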

How to view the Internet header of a specific email in Outlook

If you want to see the Internet header for a specific mail, follow these simple steps in Outlook:

• First, open Outlook and go to Mail.

• Open a particular mail; on the Message tab there is a Tags group shown in the ribbon.

• Click the small arrow in the corner of the Tags group to expand it.

• A Properties window will pop up containing the header information.

If you are a Lotus Notes user looking for conversion from Lotus Notes to Outlook, choose a trustworthy tool for accurate conversion at a very affordable price. A good tool is built with advanced technology that will preserve all your metadata, such as Internet headers, inline images and hyperlinks, safely after conversion. For more info, visit the Lotus Notes to Outlook email conversion tool.
