daniellmb shared a code snippet

Creating Custom Fiddler Rules

How can I replace HTTPS traffic from one domain with HTTP from another?

Let's say you have a problem on your secure production server and you want to debug it, but you can't set up the SSL certificate locally. One solution is to use a custom Fiddler rule to redirect the HTTPS traffic from production to your local development machine over HTTP. The following is a step-by-step walkthrough on how to do just that.

  1. Open Fiddler
  2. In the top menu select Rules > Customize Rules

At this point you may be asked: "Would you like to download and install the FiddlerScript Editor?" If this happens:

  1. Choose Yes
  2. Close Fiddler
  3. Install the Editor plugin
  4. Follow the steps in the first section again

Now you're ready to add custom rules.

  1. From the ScriptEditor's top menu choose Go > to OnBeforeRequest
  2. Add the following code to the beginning of the function
  3. Save the file and you're done!

// Handle CONNECT Tunnels
if (oSession.HTTPMethodIs("CONNECT"))
{
    oSession["x-replywithtunnel"] = "FakeTunnel";
    return;
}

// Handle HTTPS requests
var PRODUCTION = "secure.mycompany.com";
var DEVELOPMENT = "dev01.mycompany.com";
if (oSession.isHTTPS &&
   (oSession.HostnameIs(PRODUCTION) || oSession.HostnameIs(DEVELOPMENT))) {
    oSession.fullUrl = "http://" + DEVELOPMENT + oSession.PathAndQuery;
    FiddlerApplication.Log.LogFormat("HTTPS -> HTTP {0}", oSession.fullUrl);
}
Don Smith made a comment

Relocated

For a while I was maintaining a blog about code-related topics here on Coderbits. I've since consolidated all my blogging (which isn't much) into a single blog hosted over at blog.locksmithdon.net.

http://www.popgeo.net/?p=69

I've been on a mission to learn R lately. And while I've been thoroughly enjoying the ride, there's nothing like a tangible objective to really turn the learning up to 11, right? Well, I just happen to have some data in SQL Azure that's begging to be mined. This post describes what I did to connect to it from RStudio.

For posterity, I'm using version 3.1.3 of R inside RStudio v0.98.1103 on OS X Yosemite (10.10.2). Oh, and I'm a big fan of Homebrew, so I'm using it to install some of the important bits.

No DSNs

It's probably worth noting that I'm not using a DSN to connect. I'm just using a normal connection string that lives in the code. This meets my needs better since I'm not deploying to another machine. I also prefer to keep the configuration all together rather than having files scattered in obscure (or even obvious) locations on the filesystem.

Overview

At a high level, this process involves the following steps:

  • Install unixODBC using homebrew
  • Install FreeTDS using homebrew
  • Install RODBC from source using R
  • Connect to the database

I'm assuming your SQL Azure database is already running and configured. You will need to know the password of the user account that's in the ODBC connection strings provided by Azure. I don't know how to get the password from the Azure interface, or if it's even possible.

Installing unixODBC

This updates Homebrew and its formulae, installs wget (you can drop it from the command if you already have it installed), and then installs unixODBC.

brew update && brew install wget unixodbc

Installing FreeTDS

This installs FreeTDS and wires it up to unixODBC.

brew install freetds --with-unixodbc
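
Before moving on to R, you can optionally sanity-check the FreeTDS install from a terminal with the tsql utility it ships with (the bracketed values are placeholders for your own Azure host, user and password):

tsql -H [your_host].database.windows.net -p 1433 -U [your_username]@[your_host] -P [your_password]

If you get a 1> prompt, FreeTDS can reach the server.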

Installing RODBC

This installs RODBC v1.3-11. Head over here to make sure you're installing the latest version (or at least the latest 1.x) and update the two locations in this command as necessary.

wget "http://cran.r-project.org/src/contrib/RODBC_1.3-11.tar.gz" && R CMD INSTALL RODBC_1.3-11.tar.gz

Connecting to the database

Because I'm not using a DSN, there isn't anything to configure. You just need to run a couple of R commands to get the data. First you'll need to grab the ODBC connection string from the Azure portal.

Get the connection string

  • If you're using the old portal, sign in, make sure you're on the right subscription (subscription link at the top of the page), select SQL Databases on the left, select your database name in the list, select Dashboard, and then select the link in the menu on the right side of the page labeled Show connection strings. Copy the ODBC one.

  • If you're using the new portal, sign in, make sure you're on the right subscription (top/right link from the home screen), select Browse in the left menu, select SQL databases, select the name of your database, select Properties, and then select the link labeled Show database connection strings. Select the icon to copy the ODBC connection string to the clipboard.

Getting results

Here are the commands you need to connect and retrieve data.
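
The commands below assume the RODBC package is loaded, so start your R session with:

library(RODBC)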

Create a variable that contains the connection string (replace the driver with "FreeTDS" and add your password):

connection.string <- "Driver={FreeTDS};Server=[your_host].database.windows.net,1433;Database=[your_db];Uid=[your_username]@[your_host];Pwd=[your_password];Encrypt=yes;Connection Timeout=30;" 

Create a database connection:

db <- odbcDriverConnect(connection.string)

Get some data (replace your_table):

data <- sqlQuery(db, "SELECT * FROM [your_table]")
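
When you're done, it's good practice to close the connection:

odbcClose(db)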

Be sure to check out the RODBC vignette (pdf) and the package docs (pdf) for more information about the R commands available.

raphaelom made a comment

About Lucene Merge

Lucene stores its index in segments: adding a new document is a matter of appending it, and an update appends a new version while marking the old one as deleted. Over time this leaves a lot of stale, irrelevant data that only hinders information retrieval. To solve this, Lucene cleans itself up using MergePolicies, which are strategies for consolidating the index.

This is a video of Lucene indexing the Wikipedia dump, showing how merging kicks in once the threshold is reached, effectively reducing the index size as new documents are added.
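
For reference (this isn't from the post), here is a minimal Java sketch of how a merge policy is configured, assuming a Lucene 5.x-style API; the numbers are arbitrary examples:

import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.TieredMergePolicy;
import org.apache.lucene.store.FSDirectory;

public class MergeDemo {
    public static void main(String[] args) throws Exception {
        // The merge policy decides when small segments get consolidated into bigger ones.
        TieredMergePolicy mergePolicy = new TieredMergePolicy();
        mergePolicy.setSegmentsPerTier(10.0);      // allow roughly 10 segments per tier before merging
        mergePolicy.setMaxMergedSegmentMB(5120.0); // cap merged segments at about 5 GB

        IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
        config.setMergePolicy(mergePolicy);

        // Documents added through this writer are appended to segments;
        // merges run in the background according to the policy above.
        try (IndexWriter writer = new IndexWriter(FSDirectory.open(Paths.get("/tmp/lucene-index")), config)) {
            // writer.addDocument(...) / writer.updateDocument(...) would go here
        }
    }
}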

I've just recently published a node.js module that wraps the coderbits profile API. It's really simple to use, just install it:

npm install coderbits

and get the data:

var coderbits = require('coderbits');

coderbits('bit', function (error, profile) {
  if (!error) console.log(profile);
});

So if you're planning to do something with node.js and the data provided by coderbits, don't hesitate to use it and leave your opinions, suggestions and any bugs you find. Pull requests are welcome.

More info on the module page

How big a program (lines of code, usefulness, whatever) can you write without documentation or an IDE?

How much can you write without needing to compile, run, debug, REPL?

A long time ago I was a bit of a proponent of not bothering to memorize functions, argument orderings, obscure syntax, etc. of programming languages. I would find myself saying things like "I don't want to clutter my brain with stuff I can just search for in 30 seconds."

At some point I found myself coding away from a network or much documentation. This forced me to memorize a bunch of these corner cases (and not-so-corner cases) of whatever language/framework I was programming in at the time. I found that it really did pay off. And the reason it paid off was not so much the 30 seconds saved when WRITING code, but the huge amount of time saved READING code. When I say "read" I mean more than just getting the gist of the code, but actually reading it closely enough to debug it.

Which brings us to the biggest takeaway: you read much more code than you write. In fact you are constantly reading code, some that you just wrote, some that you wrote weeks, months or years ago, and a lot that you never wrote. It's a hassle, a waste of real time and an additional level of frustration not to have a good mental map of the language you are using.

Of course frustration can also come from reading code that's hard to dissect; beautiful code is code that makes the language look like it was made for the problem you are trying to solve. Read Clean Code.

I am posting my first shot at it, but I will be honest, it failed a couple of times before it ran cleanly (not having quotes around the argument, and not making $argv global inside of functions). I also didn't use in_array because then I would have to look at the docs to see the order of the needle and haystack.

Why not give it a try? How about leaving a comment with your code, or a link to your favorite codepen?

<?php

/**
 * return true if array $aList is a subset of array $bList 
 */
function isSubset($aList, $bList) {
  foreach ($aList as $a) {
    $found = false;
    foreach ($bList as $b) {  // needle and haystack of in_array?
      if ($a == $b) {
        $found = true;
        break;
      }
    }
    if (!$found) {
      return false;
    }
  }
  return true;
}

function usage() {
  global $argv;
  echo "Usage: $argv[1] '1,2,3|1,2,4'\n";
  echo "see if the list to left of | is a subset of the list to the right\n";
}

function getListsFromArgs () {
  global $argv;
  if (count($argv) != 2) {
    die(usage());
  }
  $cmd = $argv[1];
  $bits = explode("|", $cmd);

  if (count($bits) != 2) {
    die(usage());
  }

  $aList = explode(",", $bits[0]);
  $bList = explode(",", $bits[1]);

  return array($aList, $bList);
}

list($aList, $bList) = getListsFromArgs();
if (isSubset($aList, $bList)) {
  echo "Its a subset!\n";
} else {
  echo "Not a subset\n";
}
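
For what it's worth, in_array takes the needle first and the haystack second, so a version of isSubset that leans on the built-in could look like the sketch below (same behavior, just shorter):

<?php

/**
 * Same idea as isSubset above, but using in_array($needle, $haystack).
 */
function isSubsetBuiltin($aList, $bList) {
  foreach ($aList as $a) {
    if (!in_array($a, $bList)) {
      return false;
    }
  }
  return true;
}

PHP's array_diff offers an even shorter route: count(array_diff($aList, $bList)) === 0.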
raphaelom made a comment

Rlang on Ubuntu

And so the journey begins:

sudo apt-get install r-base r-base-dev 

http://nordicapis.com/using-templates-for-documentation-driven-api-design/

The problem

Suppose you have a field in your Elasticsearch index that is really an ID. In such cases, full-text matching is useless; it should always be all or nothing.

The solution

In Solr, you simply use solr.KeywordTokenizerFactory when mapping your field, but in Elasticsearch you need to define a mapping to prevent the default analysis. According to the docs, simply use:

{
    "mappings" : {
        "products" : {
            "properties" : {
                "productID" : {
                    "type" : "string",
                    "index" : "not_analyzed"
                }
            }
        }
    }
}
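
With the field mapped as not_analyzed, an exact-match term query behaves like an ID lookup. Here is a sketch (the index name my_store and the ID value are made up for illustration):

GET /my_store/products/_search
{
    "query" : {
        "term" : {
            "productID" : "ABC-123"
        }
    }
}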

http://www.rationaljava.com/2015/02/java-8-pitfall-beware-of-fileslines.html

Between typing www.wikipedia.org in the browser and reading about the unexpected Spanish Inquisition, there's a lot of magic going on.

DNS lookup

First the computer has to make sense of the URL, that is, translate it into a meaningful IP address it can connect to and interact with. The OS queries the nameserver for the correct record and caches it locally so future queries won't need to hit the remote server.
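
You can watch this step yourself (assuming the dig utility is installed) by resolving the name from a terminal:

dig +short www.wikipedia.org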

Establishing a connection

With the IP at hand, the OS can establish a socket connection between the client machine and the remote server. The stream socket is the base for HTTP communication, the application-level protocol used by websites, and is identified by the remote server IP, the service port (in this case the default 80), the local port opened and the local network interface IP. The two ends are bound together, and data written to one is relayed to the other.
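
Nothing stops you from opening that socket by hand; with telnet (or netcat) you can connect to port 80 and type the request shown below yourself:

telnet www.wikipedia.org 80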

HTTP kicks in

Request

With the two-way connection in place, the browser writes a message to the server:

GET / HTTP/1.1
Host: www.wikipedia.org
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.7) Gecko/20050414 Firefox/1.0.3
Accept: text/html
Accept-Language: en-us,en;q=0.5

The first line tells the server the client is using HTTP 1.1 for communication and requesting the root path with the GET verb. The client also sends several pieces of information using HTTP headers, metadata exchanged by the two peers about the content. For instance, the Accept header tells the server the client knows how to understand HTML, and the Accept-Language header says the content should preferably be in English.

Response

The server will then reply with the index.html document sitting at the root of the server by writing an HTTP response to the socket. Like the client, the server responds with metadata about the content it is streaming. One important point is the status code 200; HTTP has several status codes covering server availability, errors, authentication and even whether the content returned is up to date or a stale copy.

HTTP/1.1 200 OK
Date: Fri, 13 May 2005 05:51:12 GMT
Server: Apache/1.3.x LaHonda (Unix)
Last-Modified: Fri, 13 May 2005 05:25:02 GMT
Content-Length: 33414
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/html

<!DOCTYPE html>
<html>
<img src="/spanish-inquisition.jpg"/>
</html>

Rendering

The browser reads the html payload and displays it using its rendering engine.


https://www.youtube.com/watch?v=SeLOt_BRAqc

https://leanpub.com/D3-Tips-and-Tricks


The hype

If you're working in web / mobile / UI / UX development and haven't been living under a rock for the last couple of years, you've probably heard about Web Components and Polymer.

The marketing phrase says "Web Components usher in a new era of web development based on encapsulated and interoperable custom elements that extend HTML itself"; lots of people are genuinely excited about a concept that promises to profoundly challenge and change the way we're writing web and mobile applications.

jQuery and the whole plethora of web development frameworks built upon it suddenly seem to be falling out of fashion and becoming obsolete. Even the future of Angular and React is questioned by some, as Polymer core elements overlap part of the functionality already implemented by those frameworks. On the other hand, I've read numerous articles and seen quite a few presentations showing how you can integrate Polymer into your existing / ongoing Backbone/Marionette / React / Whatever.js project.

Reality check

Well, leaving the marketing story aside, let's try to look beyond the hype. I won't go into technical details (usually that's where the devil is), but Web Component specs have certainly been around for a while, and although everybody agrees we're in desperate need of a better way to deal with the issues they're addressing, the major browsers haven't been exactly quick in providing the native support. Right now, Chrome seems to be the only one offering a full implementation, the guys at Mozilla are still debating whether they should support HTML imports or not, while the guys at Microsoft are, as always, completely out of the loop (no, apparently they won't do it in Spartan either).

Some people said "no problem, we can polyfill the other browsers" (in plain English, polyfilling means compensating for missing native features with implementations written in JavaScript). And thus X-Tag, Bosonic and Polymer were born. Among them, Polymer seems to be the game-changer to keep an eye on, for a number of reasons, most notably because it looks fantastic and works really well (in Chrome), especially since the addition of the material-design Paper Elements.

So, what's the problem if the other contemporary browsers are polyfilled?

We can use Polymer today, right?

Well, maybe... but make sure to check & double-check everything. For one reason, polyfills are great, but performance is one of the issues you should pay special attention to. Don't rely on the hype, decide for yourself. Take your time to study these samples across multiple browsers, OSes and devices and make sure to keep an eye on the developer console to see what happens behind the scenes.

We can integrate Polymer into our Backbone+Marionette / Angular / React based applications, can't we?

Technically speaking, yes, you can. But it doesn't mean you should. Sometimes starting from tabula rasa is just better. Let me give you just one reason (which should be common sense) that developers excited about a new technology often choose to ignore:

  • jQuery: 82KB (or Zepto: 24KB)
  • Underscore.js: 15KB
  • Backbone.js: 20KB
  • Marionette.js: 39KB
  • webcomponents.js: 103KB

And most likely your application will actually require a lot more, since you're probably using a touch-friendly slider, Google Maps, etc... and maybe Twitter Bootstrap. And some of them also come with lots of CSS code (fyi, CSS is executed too, and sometimes the execution can be costly). As a side note, an important part of that code inevitably provides duplicate / obsolete functionality.

All that adds up to hundreds of KB of highly compressed JavaScript/CSS code that the browser has to download, understand and execute, without even considering the actual application payload. And that's an optimistic case because Backbone.js + Marionette.js is a lean framework for the functionality it provides.

All this may not seem like much nowadays, but not everybody has an unlimited 4G data plan and the latest flagship smartphone. Developers and tech-savvy people usually do, most normal people don't :-). Which means they'll only get the polyfilled, less-than-ideal experience.

I've seen a lot of promising web / mobile projects end up awfully wrong because of UX code clutter. Sometimes developers without real web experience just "throw up a ThemeForest template or skin on top" of an enterprise application; sometimes they become so accustomed to working on the latest iMac or MacBook Pro that they simply forget there are people out there using Windows laptops or cheap Android phones.

So, the bottom line is this: if you're brave enough to embrace Polymer today, maybe you should consider not mixing it with a "legacy" jQuery-based codebase. They're not always complementary, and the mix will most certainly introduce a cost alongside the benefits.

I'm writing code in CoffeeScript/TypeScript, LESS/Stylus and the Jade/HAML template engines, and I pack everything with Browserify. Can I "plug in" Polymer in my workflow?

Well, good for you. You're probably an adept of terseness and simplicity, like me :-) The bad news is you can't easily integrate Polymer into your workflow - and again, maybe you shouldn't. Among other things, CoffeeScript (which I use constantly and love, btw) appeared to compensate for some of the shortcomings of pre-ES6/7 JavaScript, and some of those are now polyfilled by webcomponents.js; Polymer was made with Bower in mind and comes with a specific packaging tool called vulcanize (a decision sometimes criticized by JS community members). If you're building a Polymer-based project, there's no real reason to add Browserify to the mix, except to show that it's possible.

I'm addicted to LiveReload; since I've discovered it, I simply can't work without it. It works with WebComponents / Polymer, right?

For people who haven't (yet?) heard about it, LiveReload is a tool / set of tools that brings an enormous boost in developer productivity by automatically reloading web assets as they change (images, fonts, stylesheets, scripts), without reloading the entire page. While this at first sight may seem like a trifle, it's actually invaluable: consider you're working on a context-dependent dynamic application, and during the development process you need to bring some visual adjustments by modifying some stylesheets. Without LiveReload, you'd have to hit "refresh", which is no big deal... but what if it takes you a few minutes to reach the same point in the application workflow? Plus, if your server-side is .NET, restarting the debug process in Visual Studio takes forever.

The bad news is that LiveReload doesn't play nicely with Polymer. I've been quite disappointed to discover this, but it doesn't. Updating an element will trigger a full page reload, while modifying a linked stylesheet won't trigger a reload at all. Which kind of defeats the purpose of LiveReload.

"But I've seen a how-to on setting up a Polymer application and they did mention LiveReload", you might say. Yes, people have demoed front-end tooling scenarios for Polymer, but apparently the purpose was mostly "academic" and they did't dwell much on the subject of how LiveReload actually works...

Don't take my word for it, go ahead, try it for yourself.

So, should I use WebComponents & Polymer today?

I'm not saying you shouldn't. On the contrary. We need to move things forward; we need a better web, we desperately need better tools to build it and Polymer definitely has the potential to be a better tool. But don't let your excitement, the marketing hype or the tech buzz-of-the-day cloud your judgement. Make an informed decision and don't expect a silver bullet.

Personally, after two weeks of studying and playing around, I still have mixed feelings about it. I'm not sure I'd use it for a public website with a wide, non tech-savvy audience. But it does look like a safe bet for building a PhoneGap-packaged application for Android devices...

Final thoughts

Everything I wrote above is just a personal opinion regarding the practical state of things today, Feb 17th 2015, as I am evaluating Polymer as an alternative for a personal project. But technology is changing and evolving constantly, so make sure to draw your own conclusions.


This article was initially published on LinkedIn here.

What is an Internet Header?

An email consists of three main parts: the envelope, the message body and the headers. The envelope is the part we never actually see; it is the internal mechanism responsible for routing the mail. The message body is the content we can see, edit, copy and forward. Then comes our main topic, the internet header, which is the most interesting part of an email but also the hardest to understand. The internet header records the routing trail and other details needed to identify the sender, recipient, timing and subject of a message. Because these headers play an essential role in identifying forgery and theft, email users pay close attention to them when switching platforms; for example, when converting from Lotus Notes to Outlook they prefer a tool that preserves the internet header so it can be used later. Here is an overview that should help you better understand internet headers.

Understanding the characteristics of an internet header

From: identifies the sender of the mail and shows the sender's email address. It is the least reliable field, since it is easily forged.

To: shows the details of the addressee.

Subject: contains a short description of the purpose of the email.

Date: stores the date and time when the message was sent. It can appear as "Date: Tue, 13 Jan 2015 16:26:00 +0530".

Return-Path: similar to Reply-To, it contains the address to which returned (bounced) mail is sent.

MIME-Version: MIME stands for Multipurpose Internet Mail Extensions, the standard used to share images, video and other multimedia over mail. It can appear as "MIME-Version: 1.0".

Message-ID: a string assigned by the mail system at the time the message is created. It can appear as "Message-ID: <83F79F48-927D-4168-AE17-93FBBB3E846C@email1.serverid>".

Content-Type: defines the format used for the text, for example plain text or rich text. It can appear as "Content-Type: text/plain; charset="utf-8"".

X-Spam-Status: contains the spam score, usually added by your mail provider or client. A related example is "X-Antivirus-Status: Clean".
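
Putting these together, a hypothetical header block (addresses and subject invented for illustration, the rest reusing the examples above) might look like this:

From: alice@example.com
To: bob@example.com
Subject: Quarterly report
Date: Tue, 13 Jan 2015 16:26:00 +0530
Return-Path: <alice@example.com>
MIME-Version: 1.0
Message-ID: <83F79F48-927D-4168-AE17-93FBBB3E846C@email1.serverid>
Content-Type: text/plain; charset="utf-8"
X-Spam-Status: No, score=0.1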

How to view the internet header of a specific email in Outlook

If you want to see the internet header for a specific mail, follow these simple steps in Outlook:

• First, open Outlook and go to Mail.

• Open a particular mail; on the Message tab of the ribbon there is a Tags group.

• Click the small arrow in the corner of the Tags group to expand it.

• A Properties window will pop up containing the header information.

If you are a Lotus Notes user looking to convert to Outlook, choose a trustworthy tool for accurate conversion at an affordable price, one built to preserve all your metadata such as internet headers, inline images and hyperlinks after conversion. For more info, see the Lotus Notes to Outlook email conversion tool.

jalbertbowden shared a link

Nested Links

http://kizu.ru/en/fun/nested-links/

http://www.oreilly.com/data/free/women-in-data.csp

I have been really getting into Veritasium's YouTube channel, which deals with a variety of science topics, mostly physics. One video that really inspired me was about Bell's quantum entanglement experiment, which you can view here. And here is my source code for quantum entanglement in Rust.


To see that I really understand it, I thought I would build a simple simulation of the two hypotheses: either the particles share hidden information, or the particles instantaneously decide which spin they will have regardless of distance.

This was pretty fun! I was able to try out the rustic ways of doing benchmarking and unit testing, and the match operator (algebraic data types).

Wrote up a quick little tutorial on creating a music visualizer with Web Audio on Canvas: http://tybenz.com/post/visualizr/


My current Klout score is 62, yet Coderbits seems to still have me at a score of 58 - how do I fix this? BTW, although I only just hit 62 today, my score has been over 60 for several weeks, so I'm not sure why I'm still showing 58.

http://www.sitepoint.com/php-7-revolution-return-types-removed-artifacts/ With the planned date for PHP 7’s release rapidly approaching, the internals group is hard at work trying to fix our beloved language as much as possible by both removing artifacts and adding some long desired features. There are many RFCs we could study and discuss, but in this post, I’d like to focus on three that grabbed my attention.

http://www.sitepoint.com/encrypt-large-messages-asymmetric-keys-phpseclib/ Most of us understand the need to encrypt sensitive data before transmitting it. Encryption is the process of translating plaintext (i.e. normal data) into ciphertext (i.e. secret data). During encryption, plaintext information is translated to ciphertext using a key and an algorithm. To read the data, the ciphertext must be decrypted (i.e. translated back to plaintext) using a key and an algorithm.
