Tag Archive | "from"

How The Internet Happened: From Netscape to the iPhone

Brian McCullough, who runs the Internet History Podcast, also wrote a book named How The Internet Happened: From Netscape to the iPhone, which did a fantastic job of capturing the ethos of the early web and telling the backstory of so many people & projects behind its evolution.

I think the quote which best captures the magic of the early web is:

Jim Clark came from the world of machines and hardware, where development schedules were measured in years—even decades—and where “doing a startup” meant factories, manufacturing, inventory, shipping schedules and the like. But the Mosaic team had stumbled upon something simpler. They had discovered that you could dream up a product, code it, release it to the ether and change the world overnight. Thanks to the Internet, users could download your product, give you feedback on it, and you could release an update, all in the same day. In the web world, development schedules could be measured in weeks.

The part I bolded in the above quote from the book really captures the magic of the Internet & what pulled so many people toward the early web.

The current web – dominated by never-ending feeds & a variety of closed silos – is a big shift from the early days of web comics & other underground cool stuff people created & shared because they thought it was neat.

Many established players missed the actual direction of the web by trying to create something more akin to the web of today before the infrastructure could support it. Many of the “big things” driving web adoption relied heavily on luck – combined with a lot of hard work & a willingness to be responsive to feedback & data.

  • Even when Marc Andreessen moved to the valley he thought he was late and he had “missed the whole thing,” but he saw the relentless growth of the web & decided making another web browser was the play that made sense at the time.
  • Tim Berners-Lee was dismayed when Andreessen’s web browser enabled embedded image support in web documents.
  • Early Amazon review features were originally meant for editorial content from Amazon itself. Bezos originally wanted to launch a broad-based Amazon like it is today, but realized it would be too capital intensive & focused on books at the start so he could sell a known commodity with a long tail. Amazon was initially built by leveraging two book distributors (Ingram and Baker & Taylor) & R. R. Bowker’s Books In Print catalog. They also used clever hacks to meet minimum order requirements, like padding orders with out-of-stock books, so they only had to buy the books customers had actually purchased.
  • eBay began as an /aw/ subfolder on the eBay domain name, which was hosted on a residential internet connection. Pierre Omidyar coded the auction service over Labor Day weekend in 1995. The domain had other sections focused on topics like Ebola. It was switched from AuctionWeb to a standalone site only after the ISP started charging for a business line. It had no formal PayPal integration or anything like that; rather, once listings started being charged a commission, merchants would mail in physical checks to pay the platform’s share of their sales. Beanie Babies also helped skyrocket platform usage.
  • The reason AOL carpet bombed the United States with CDs – at their peak half of all CDs produced were AOL CDs – was their initial response rate was around 10%, a crazy number for untargeted direct mail.
  • Priceline was lucky to have survived the bubble, as their plan was to spread broadly into categories beyond travel & they were losing about $30 per airline ticket sold.
  • The broader web bubble left behind valuable infrastructure like unused fiber to fuel continued growth long after the bubble popped. The dot com bubble was possible in part because there was a secular bull market in bonds going back to the early 1980s, & falling debt service payments increased financial leverage and company valuations.
  • TED members hissed at Bill Gross when he unveiled GoTo.com, which ranked “search” results based on advertiser bids.
  • Excite turned down buying the PageRank technology from the Google founders for $1.6 million, in part because, as Excite CEO George Bell recalled, Larry Page insisted: ‘If we come to work for Excite, you need to rip out all the Excite technology and replace it with [our] search.’ That, in Bell’s recollection, is ultimately where the deal fell apart.
  • Steve Jobs initially disliked the multi-touch technology that mobile would rely on, one of the early iPhone prototypes had the iPod clickwheel, and Apple was against offering an app store in any form. Steve Jobs so loathed his interactions with the record labels that he did not want to build a phone & first licensed iTunes to Motorola, where they made the horrible ROKR phone. He only ended up building a phone after Cingular / AT&T begged him to.
  • Wikipedia was originally launched as a backup feeder site intended to feed content into Nupedia.
  • Even after Facebook had strong traction, Mark Zuckerberg kept working on other projects like a file-sharing service. Judging by the complaints, Facebook’s news feed was publicly hated, but it almost instantly led to a doubling of usage of the site, so they never dumped it. After spreading from college to college, Facebook struggled to expand to other audiences, & opening registration up to everyone was a Hail Mary move to see if it would rekindle growth instead of selling to Yahoo! for a billion dollars.

The book offers a lot of color on many important web-related companies.

And many companies which were only briefly mentioned also ran into the same sort of lucky breaks the above companies did. PayPal was heavily reliant on eBay for initial distribution, but even that was something PayPal initially tried to block until it became so obvious they stopped fighting it:

“At some point I sort of quit trying to stop the EBay users and mostly focused on figuring out how to not lose money,” Levchin recalls. … In the late 2000s, almost a decade after it first went public, PayPal was drifting toward obsolescence and consistently alienating the small businesses that paid it to handle their online checkout. Much of the company’s code was being written offshore to cut costs, and the best programmers and designers had fled the company. … PayPal’s conversion rate is lights-out: Eighty-nine percent of the time a customer gets to its checkout page, he makes the purchase. For other online credit and debit card transactions, that number sits at about 50 percent.

Here is a podcast interview of Brian McCullough by Chris Dixon.

How The Internet Happened: From Netscape to the iPhone is a great book well worth a read for anyone interested in the web.


SEO Book


Get more from your customer data with marketing automation

Thursday, February 21, at 1:00 PM ET (10:00 AM PT)



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


Claire Giovino: From Zero To Six Figures, Behind The Scenes Of The First Year Of InboxDone.com

[ Download MP3 | Transcript Coming Soon | iTunes | Soundcloud | Raw RSS ] Today’s podcast is a little bit different. My guest is Claire Giovino, who is my partner and co-founder in our startup company InboxDone.com. Claire is also the voice behind the intro to this podcast, so you might recognize her immediately :-) . […]

The post Claire Giovino: From Zero To Six Figures, Behind The Scenes Of The First Year Of InboxDone.com appeared first on Yaro.Blog.

Entrepreneurs-Journey.com by Yaro Starak


Nofollow couldn’t save the Google webmaster blog from comment spam

Google’s plan of “preventing comment spam” with the nofollow link attribute didn’t work for its own webmaster blog.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


India is planning to achieve 500 GW of production from renewable energy by 2028




style="display:inline-block;width:300px;height:250px"
data-ad-client="ca-pub-7815236958543991"
data-ad-slot="3672884813">

India is planning to achieve 500 gigawatts (GW) of production from renewable energy by 2028, in order to get to its goal of 40 per cent of electricity generation from non-fossil fuels by 2030, Ministry of New and Renewable Energy secretary Anand Kumar said at the India-Norway Business Summit 2019 in New Delhi.

Of this 500 GW, 350 GW would come from solar, 140 GW from wind, and the remaining roughly 10 GW of generation capacity would come from small hydro and biomass power.

“This figure excludes large hydro. If we take large hydro into account the figure will grow to 560 GW to 575 GW. To reach this figure we have to bid out 30 GW of solar energy and 10 GW of wind energy every year,” Kumar said.

He added that India’s requirement for electricity generation capacity may reach 840 GW by 2030 if the country’s Gross Domestic Product (GDP) grows at a rate of 6.5 per cent.

“Out of 840 GW, we plan to install a little more than 500 GW in renewables. We have installed 75 GW renewable energy capacity in the country and another 46 GW is under various stages of installations,” added Kumar.

Latest solar news


12 Methods to Get from Blank Page to First Draft

If you’re like me, after taking some time off from writing, you’re refreshed and champing at the bit to translate…

The post 12 Methods to Get from Blank Page to First Draft appeared first on Copyblogger.


Copyblogger


The Results of Our ‘Secret Contest’: 5 Winning Blog Posts from Our Certification Community

Did you know that Copyblogger certifies terrific content marketers? Well, we do, and we’ve been thinking about more ways we…

The post The Results of Our ‘Secret Contest’: 5 Winning Blog Posts from Our Certification Community appeared first on Copyblogger.


Copyblogger


3 Big Lessons from Interviewing John Mueller at SearchLove London – Whiteboard Friday

Posted by willcritchlow

When you’ve got one of Google’s most helpful and empathetic voices willing to answer your most pressing SEO questions, what do you ask? Will Critchlow recently had the honor of interviewing Google’s John Mueller at SearchLove London, and in this week’s edition of Whiteboard Friday he shares his best lessons from that session, covering the concept of Domain Authority, the great subdomain versus subfolder debate, and a view into the technical workings of noindex/nofollow.


Video Transcription

Hi, Whiteboard Friday fans. I’m Will Critchlow from Distilled, and I found myself in Seattle, wanted to record another Whiteboard Friday video and talk through some things that I learned recently when I got to sit down with John Mueller from Google at our SearchLove London conference recently.

So I got to interview John on stage, and, as many of you may know, John is a webmaster relations guy at Google and really a point of contact for many of us in the industry when there are technical questions or questions about how Google is treating different things. If you followed some of the stuff that I’ve written and talked about in the past, you’ll know that I’ve always been a little bit suspicious of some of the official lines that come out of Google and felt like either we don’t get the full story or we haven’t been able to drill in deep enough and really figure out what’s going on.

I was under no illusions that I might be able to completely fix this in one go, but I did want to grill John on a couple of specific things where I felt like we hadn’t maybe asked things clearly enough or got the full story. Today I wanted to run through a few things that I learned when John and I sat down together. A little side note, I found it really fascinating doing this kind of interview. I sat on stage in a kind of journalistic setting. I had never done this before. Maybe I’ll do a follow-up Whiteboard Friday one day on things I learned and how to run interviews.

1. Does Google have a “Domain Authority” concept?

But the first thing that I wanted to quiz John about was this domain authority idea. So here we are on Moz. Moz has a proprietary metric called domain authority, DA. I feel like when, as an industry, we’ve asked Google, and John in particular, about this kind of thing in the past, does Google have a concept of domain authority, it’s got bundled up with feeling like, oh, he’s had an easy way out of being able to answer and say, “No, no, that’s a proprietary Moz metric. We don’t have that.”

I felt like that had got a bit confusing, because our suspicion is that there is some kind of an authority or a trust metric that Google has and holds at a domain level. We think that’s true, but we felt like they had always been able to wriggle out of answering the question. So I said to John, “Okay, I am not asking you do you use Moz’s domain authority metric in your ranking factors. Like we know that isn’t the case. But do you have something a little bit like it?”

Yes, Google has metrics that map into similar things

John said yes. He said yes, they have metrics that, his exact quote was, “map into similar things.” My way of phrasing this was this is stuff that is at the domain level. It’s based on things like link authority, and it is something that is used to understand performance or to rank content across an entire domain. John said yes, they have something similar to that.

New content inherits those metrics

They use it in particular when they discover new content on an existing domain. New content, in some sense, can inherit some of the authority from the domain, and this is part of the reason why we figured they must have something like this, because we’ve seen identical content perform differently on different sites. We know that there’s something to this. So yes, John confirmed that until they have some of those metrics developed, when they’ve seen a bit of content for long enough, and it can have its own link metrics and usage metrics, in the intervening time up until that point it can inherit some of this stuff from the domain.

Not wholly link-based

He did also just confirm that it’s not just link-based. This is not just a domain-level PageRank type thing.

2. Subdomains versus subfolders

This led me into the second thing that I really wanted to get out of him, which was — and when I raised this, I got kind of an eye roll, “Are we really going down this rabbit hole” — the subdomain versus subfolder question. You might have seen me talk about this. You might have seen people like Rand talk about this, where we’ve seen cases and we have case studies of moving blog.example.com to example.com/blog and changing nothing else and getting an uplift.

We know something must be going on, and yet the official line out of Google has for a very long time been: “We don’t treat these things differently. There is nothing special about subfolders. We’re perfectly happy with subdomains. Do whatever is right for your business.” We’ve had this kind of back-and-forth a few times. The way I put it to John was I said, “We have seen these case studies. How would you explain this?”

They try to figure out what belongs to the site

To his credit, John said, “Yes, we’ve seen them as well.” So he said, yes, Google has also seen these things. He acknowledged this is true. He acknowledged that it happens. The way he explained it connects back into this Domain Authority thing in my mind, which is to say that the way they think about it is: Are these pages on this subdomain part of the same website as things on the main domain?

That’s kind of the main question. They try and figure out, as he put it, “what belongs to this site.” We all know of sites where subdomains are entirely different sites. If you think about a blogspot.com or a WordPress.com domain, subdomains might be owned and managed by entirely different people, and there would be no reason for that authority to pass across. But what Google is trying to do and is trying to say, “Is this subdomain part of this main site?”

Sometimes this includes subdomains and sometimes not

He said sometimes they determine that it is, and sometimes they determine that it is not. If it is part of the site, in their estimation, then they will treat it as equivalent to a subfolder. This, for me, pretty much closes this loop. I think we understand each other now, which is Google is saying, in these certain circumstances, they will be treated identically, but there are circumstances where it can be treated differently.

My recommendation stays what it’s always been, which is 100% if you’re starting from the outset, put it on a subfolder. There’s no upside to the subdomain. Why would you risk the fact that Google might treat it as a separate site? If it is currently on a subdomain, then it’s a little trickier to make that case. I would personally be arguing for the integration and for making that move.

If it’s treated as part of the site, a subdomain is equivalent to a subfolder

But unfortunately, but somewhat predictably, I couldn’t tie John down to any particular way of telling if this is the case. If your content is currently on a subdomain, there isn’t really any way of telling if Google is treating it differently, which is a shame, but it’s somewhat predictable. But at least we understand each other now, and I think we’ve kind of got to the root of the confusion. These case studies are real. This is a real thing. Certainly in certain circumstances moving from the subdomain to the subfolder can improve performance.

3. Noindex’s impact on nofollow

The third thing that I want to talk about is a little bit more geeked out and technical, and also, in some sense, it leads to some bigger picture lessons and thinking. A little while ago John kind of caught us out by talking about how, if you have a page that you noindex and keep it that way for a long time, Google will eventually treat that as equivalent to a noindex, nofollow.

In the long-run, a noindex page’s links effectively become nofollow

In other words, even if you’ve got the page set as noindex, follow, the links off that page will effectively be nofollowed. We found that a little bit confusing and surprising. I mean I certainly felt like I had assumed it didn’t work that way simply because they have the noindex, follow directive, and the fact that that’s a thing seems to suggest that it ought to work that way.
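
To make the directives under discussion concrete, here is a minimal sketch in Python (purely illustrative on my part, not anything Google runs) of how a robots meta content value such as "noindex, follow" splits into the two separate signals being talked about: whether the page may be indexed, and whether the links on it may be followed.

# Minimal illustrative sketch: split a robots meta content string, e.g.
# <meta name="robots" content="noindex, follow">, into the two signals
# discussed above. Not Google's implementation.

def parse_robots_meta(content: str) -> dict:
    """Return index/follow flags from a robots meta content value."""
    tokens = {token.strip().lower() for token in content.split(",")}
    return {
        "index": "noindex" not in tokens,    # may the page appear in search results?
        "follow": "nofollow" not in tokens,  # may the links on the page be followed?
    }

print(parse_robots_meta("noindex, follow"))    # {'index': False, 'follow': True}
# Per the discussion above, a page left as noindex for long enough ends up
# treated more like this combination, whatever the tag actually says:
print(parse_robots_meta("noindex, nofollow"))  # {'index': False, 'follow': False}

The point of the distinction is simply that noindex and nofollow are separate switches in the markup, which is why the long-run behavior John described came as a surprise.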

It’s been this way for a long time

It wasn’t really so much about the specifics of this, but more the like: How did we not know this? How did this come about and so forth? John talked about how, firstly, it has been this way for a long time. I think he was making the point none of you all noticed, so how big a deal can this really be? I put it back to him that this is kind of a subtle thing and very hard to test, very hard to extract out the different confounding factors that might be going on.

I’m not surprised that, as an industry, we missed it. But the point being it’s been this way for a long time, and Google’s view and certainly John’s view was that this hadn’t been hidden from us so much as the people who knew this hadn’t realized that they needed to tell anyone. The actual engineers working on the search algorithm, they had a curse of knowledge.

The curse of knowledge: engineers didn’t realize webmasters had the wrong idea

They knew it worked this way, and they had never realized that webmasters didn’t know that or thought any differently. This was one of the things that I was trying to push John on a little more, saying, “More of this, please. Give us more access to the engineers. Give us more insight into their way of thinking. Get them to answer more questions, because then out of that we’ll spot the stuff that we can be like, ‘Oh, hey, that thing there, that was something I didn’t know.’ Then we can drill deeper into that.”

That led us into a little bit of a conversation about how John operates when he doesn’t know the answer, and so there were some bits and pieces that were new to me at least about how this works. John said he himself is generally not attending search quality meetings. The way he works is largely off his knowledge and knowledge base type of content, but he has access to engineers.

They’re not dedicated to the webmaster relations operation. He’s just going around the organization, finding individual Google engineers to answer these questions. It was somewhat interesting to me at least to find that out. I think hopefully, over time, we can generally push and say, “Let’s look for those engineers. John, bring them to the front whenever they want to be visible, because they’re able to answer these kinds of questions that might just be that curse of knowledge that they knew this all along and we as marketers hadn’t figured out this was how things worked.”

That was my quick run-through of some of the things that I learned when I interviewed John. We’ll link over to more resources and transcripts and so forth. But it’s been a blast. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


3 Ways to Persuade People Thinking about Buying from You

You shouldn’t think about growing your audience. Actually, let me rephrase that: You shouldn’t focus on growing your audience. Especially…

The post 3 Ways to Persuade People Thinking about Buying from You appeared first on Copyblogger.


Copyblogger


How to Transform from Fan to Fanatic to Fantastic Content Creator

Building an audience involves a lot of trial and error. But those who wish to have their own audiences make…

The post How to Transform from Fan to Fanatic to Fantastic Content Creator appeared first on Copyblogger.


Copyblogger

