The company I work for plans releases on a quarterly basis. So 4 times a year, we're all-hands-on-deck for a couple days of meetings. We meet as one large team to discuss the main focus, split into smaller teams to work on the details and what we'll accomplish in each sprint, and then come back together to talk about inter-team dependencies.
This quarter, I get to delve into a section of our app that uses a language I didn't have the privilege of using even in college - C++ (they had already moved on to using Java to teach programming). Whenever I'm learning something new, I like to find a pile of good resources to dig into. After doing a little research for an evening, here's what I've got so far. If you've got your own great resources, please let me know - I'd love to check them out!
Beginner
C++ Succinctly, Michael McLaughlin (2014) The aim of this book is to leverage your existing C# knowledge in order to expand your skills. Whether you need to use C++ in an upcoming project, or simply want to learn a new language (or reacquaint yourself with it), this book will help you learn all of the fundamental pieces of C++ so you can begin writing your own C++ programs.
Learn C++ - LearnCpp.com Unlike many other sites and books, these tutorials don’t assume you have any prior programming experience. We’ll teach you everything you need to know as you progress, with lots of examples along the way. Whether you’re interested in learning C++ as a hobby or for professional development, you’re in the right place!
C++ Language, cplusplus.com These tutorials explain the C++ language from its basics up to the newest features introduced by C++11. Chapters have a practical orientation, with example programs in all sections to start practicing what is being explained right away.
C++ Tutorial, Tutorials Point This tutorial has been prepared for beginners to help them understand basic to advanced concepts related to C++. Before you start practicing with the various examples given in this tutorial, we assume that you are already aware of the basics of computer programs and programming languages.
C++ Annotations, Frank B. Brokken (1994-present) This document offers an introduction to the C++ programming language. It is intended for knowledgeable users of C (or any other language using a C-like grammar, like Perl or Java) who would like to know more about, or make the transition to, C++. It is not a complete C/C++ handbook, as much of the C background of C++ is not covered.
Interactive C++ Tutorial - learn-cpp.org Whether you are an experienced programmer or not, this website is intended for everyone who wishes to learn the C++ programming language. There is no need to download anything. Just click on the chapter you wish to begin from, and follow the instructions.
The C++ Language, Libraries, Tools, and Other Topics, Michael Adams This document, which consists of approximately 2500 lecture slides, offers a wealth of information on many topics relevant to programming in C++, including coverage of the C++ language itself, the C++ standard library and a variety of other libraries, numerous software tools, and an assortment of other programming-related topics. The coverage of the C++ language and standard library is current with the C++17 standard.
Intermediate
Modern C++ Programming Cookbook, Marius Bancila (May 2017) Over 100 recipes to help you overcome your difficulties with C++ programming and gain a deeper understanding of the workings of modern C++.
C++ Core Guidelines by Bjarne Stroustrup and Herb Sutter (2015-present) The aim of this document is to help people to use modern C++ effectively. By "modern C++" we mean effective use of the ISO C++ standard (currently C++17, but almost all of our recommendations also apply to C++14 and C++11). The guidelines are focused on relatively high-level issues, such as interfaces, resource management, memory management, and concurrency.
Advanced
The Boost C++ Libraries, Boris Schäling (2008-present) Because the Boost C++ Libraries are based on the standard, they are implemented using state-of-the-art C++. They enable you to boost your productivity as a C++ developer. Since the Boost libraries are based on, and extend, the standard, you should know the standard well. You should understand and be able to use containers, iterators, and algorithms, and ideally you should have heard of concepts such as RAII, function objects, and predicates. The better you know the standard, the more you will benefit from the Boost libraries.
Practical Guide to Bare Metal C++, Alex Robenko The primary intended audience of this document is professional C++ developers who want to understand bare metal development a little bit better, get to know how to use their favourite programming language in an embedded environment, and probably bring their C++ skills to an “expert” level.
Courses
If you're looking for a little more structure, check out these free MOOCs, which are both estimated to take about 20 hours:
My goal one evening this weekend was to find out how to open the directory where Windows Store apps are installed, so I could write a script to back up some files. Never figured that one out, but I did end up spending a couple hours reading through old PC Mag issues from 40 years ago. How'd I get there? The magic of the Internet.
Marketing was a huge part of the magazine, including corny ads like the guys who apparently shake hands everywhere they go. I can't figure out if they're locked in a passive-aggressive struggle to see who gives in first, or if their hands are just superglued together. And others are a bit.. cringeworthy.
One of the quainter parts was, for the first year, their illustrations of their readers' "wish list" items. Unfortunately they stopped doing it after that. It's interesting to see what the sore points were for the first users of the PC - often things we just take for granted now. 🐁
And who couldn't identify with them? All these years later, we still have our wish list items! Here's mine...
A USB adapter that can be inserted no matter how it's flipped. 🔌
An air-gap platform under laptops, so they don't toast your legs or block their own vents.
Security features like 2FA and BitLocker enabled at the hardware level by default.
Software TOS that are shorter and easier to understand. 📜
What about you? What are your top wish list items?
Although they stopped printing paper issues 10 years ago, you can still catch them online at pcmag.com.
Once again the dev community is reminded that although we sometimes imagine we're building one well-founded layer upon another, reality can be a bit more... sand castle at high tide. Basket of eggs. House of cards. 🃏🃏🃏
Several years ago, a developer got strong-armed into renaming his npm module, so he took his ball and went home, leaving thousands of projects in a broken state. Last year, a popular npm module called event-stream was handed off to an unknown "volunteer", who ended up not having the best of intentions (understatement).
And a week ago, the rest-client gem was updated with malicious code that, among other things, called out to a random pastebin file and executed the contents. It was pulled in by a thousand projects, before it was yanked a couple days ago and replaced with a clean version. This time, it was a developer's RubyGems account that was hacked, which gave the hacker access to update the gem. It happens.
This'll make the circuit through the dev community for a while, but it's not the first time it's happened, and certainly won't be the last. Is there anything we can do?
Reasonable Precautions
Problems like these will likely always be with us, and although nothing's foolproof, it'd be stupid to say there's nothing we can do. Here are two that come to mind that likely would've prevented the RubyGems issue from happening at all...
Secure your code (by securing your accounts)
2FA should be enforced, not an option to enable. If someone guessed that dev's RubyGems password, they'd have been unlikely to gain access if he'd also had 2FA enabled. In fact, especially in light of the recent git ransom campaign that hit compromised accounts across several repo platforms, our team at work decided to require 2FA for the whole organization. Here are more ways to secure GitHub too.
I have a number of browser extensions and a package on NuGet, between which there's a couple thousand users who'd be affected if my accounts were hacked and malicious code uploaded. NuGet packages don't auto-update by default, but most browsers do. You can easily enable 2FA for your Chrome, Firefox, Microsoft, RubyGems, and myriad other accounts. Doing so doesn't just protect you, but anyone using your code too!
Visual Studio won't (afaik) update NuGet packages automatically during the build process, but build tools for other languages do.
In Erlang, for example, rebar3 provides several ways to specify which version of a dependency to grab. Of the options below, specifying the exact commit (ref) you're interested in is the safest way to go: that commit represents a snapshot in time that won't be affected by any future changes.
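To give a feel for it, here's a rough sketch of a rebar.config deps section. The package name, repo URL, and commit hash are placeholders, and in a real config you'd pick just one of these forms per dependency:

```erlang
{deps, [
    %% no version at all - takes whatever the package index offers
    some_dep,

    %% a published hex package version
    {some_dep, "1.2.0"},

    %% a git branch - changes every time the branch moves
    {some_dep, {git, "https://github.com/example/some_dep.git", {branch, "master"}}},

    %% a git tag - more stable, though tags can be deleted and re-pushed
    {some_dep, {git, "https://github.com/example/some_dep.git", {tag, "1.2.0"}}},

    %% an exact commit (ref) - a snapshot in time that won't change underneath you
    {some_dep, {git, "https://github.com/example/some_dep.git", {ref, "81a2b3c4d5"}}}
]}.
```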
Similarly in Ruby, Bundler's Gemfile provides several ways to specify versions. Of the options below, the optimistic version constraint >=1.0 is the least secure. The pessimistic constraint ~>1.1 isn't much better. In fact, those thousand people affected by the rest-client hack could've had ~>1.6.12 and still been affected. If they knew they wanted that particular legacy version, they could have specified '1.6.12' exactly.
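In a Gemfile, those constraints look roughly like this (using rest-client as the example; in reality you'd declare the gem once and keep only one of these lines):

```ruby
# Gemfile
source 'https://rubygems.org'

# optimistic: anything 1.0 or newer - the least secure option
gem 'rest-client', '>= 1.0'

# pessimistic: any 1.x release from 1.1 up - not much better
gem 'rest-client', '~> 1.1'

# pessimistic at the patch level: still floats up to newer 1.6.x patches,
# which is exactly how a compromised release gets pulled in
gem 'rest-client', '~> 1.6.12'

# exact pin: only ever installs the version you vetted
gem 'rest-client', '1.6.12'
```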
No matter what language or build tool you're using, the best thing you could do is check out the source for a project you want to use so that you're reasonably sure it's doing what it's supposed to do, and then lock your project that depends on it to that specific version. Updating to a newer version should be a deliberate, conscientious action, not a roll of the dice.
Inspect updates to third-party dependencies
No one can expect a joe-regular browser user to inspect their extensions before updating them, even if there were a way to disable automatic updates. But we devs are paid to understand this stuff, and to protect the end-user from bad code. Luckily, Jussi Koljonen did just that when he noticed the compromised update in the Ruby gem the other day. Would you or I? Maybe, maybe not.
Following on the heels of targeting a single version of a dependency: when you do decide to move to a newer version, it'd be a good idea to check out the differences. If it's a big change, it might not be reasonable to understand everything, but I think most of us, looking at a git diff, would notice a new piece of code that loads an external file and executes its contents.
Reasonable Solutions
None of the above are foolproof solutions.. just reasonable precautions to take. Even if you follow those and all the other advice you'll find online, there are no guarantees. 2FA won't save you if GitHub or RubyGems is hacked. Inspecting the code won't help if it's minified, obfuscated, or so complex that it's nearly impossible to decipher anyway.
When it comes to natural disasters, like tornadoes and earthquakes and hurricanes, no one talks about stopping them. You take precautions - board up windows, move to the center of a building, don't wave a golf club over your head in a storm. You can play it smart, but the reality is that you can't stop everything. Mitigate them. Lessen the damage. I think that's the same solution here.
Principle of Least Authority (POLA)
There's a concept called the Principle of Least Authority (POLA), which we already rely on for browser extensions and mobile apps, but it hasn't been adopted everywhere, and even where it has, it hasn't necessarily been implemented well. Basically, if rest-client didn't have a reason to retrieve and execute remote files, the malicious code injected into it shouldn't have been able to do so either... at least, not without somehow prompting the consumer to allow more privileges, which likely would have raised red flags.
A suitable flaw in any piece of software, written by Microsoft or anyone else, can be used to grant an attacker all the privileges of the user. Reducing the number of such flaws will make finding points of attack harder, but once one is found, it is likely to be exploited. The mistake is in asking "How can we prevent attacks?" when we should be asking "How can we limit the damage that can be done when an attack succeeds?". The former assumes infallibility; the latter recognizes that building systems is a human process.
Then check out POLA Would Have Prevented the Event-Stream Incident. The comments are worth reading too. It's the first time I'd heard the term POLA, even though I've applied the principle before. I never considered how it could be extended to the apps we develop, the OSes we use, etc. Now I want to investigate some of the things Alan mentions in his talk, like the E programming language and a virus-safe (not necessarily virus-free!) computing environment.
At the end of the day, it's a shame we have to jump through these hoops at all. It's not enough to have a curiosity of how things work - it needs to be focused correctly. Some people create things, because creating is fulfilling. Others destroy things, just because they can.
One of the proudest innovations in the world of law has to be the modern "terms of service". A combination of cover-your-arse and the-customer-comes-last, it frequently seems like a contest of whoever has the longest TOS wins.
Most of us just find it easier to believe a company wouldn't do anything that bad - at least until we stumble on one that's particularly insane, and then we're like:
Without further ado, here's a few I found that seem especially silly. Enjoy! 😁
Using our Platform involves meeting real people and doing real things in the real world, which can sometimes lead to unexpected situations. We can’t control what happens in the real world, and we are not responsible for it. You should use common sense and good judgment when interacting with others.
You understand that when using the Service, you will be exposed to Content from a variety of sources, and that YouTube is not responsible for the accuracy, usefulness, safety, or intellectual property rights of or relating to such Content.
Third parties are authorized to include active links on Web sites they control to direct a browser to Allstate's "home page" at https://www.allstate.com. However, third parties may not include on their Web sites links to any other page maintained on the allstate.com Web site unless they have received prior written permission of an officer of Allstate or unless the Linking Site is an Internet search engine.
In the event of a system failure or interruption, including but not limited to acts of god, your data may be lost or destroyed. Any transactions that you initiated, were in the process of completing, or completed before a system failure or interruption should be verified by you through means other than online to ensure the accuracy and completeness of those transactions. You assume the risk of loss of your data during any system failure or interruption and the responsibility to verify the accuracy and completeness of any transactions so affected.
Whenever you use the online services, you must obey the rules of the road and all applicable rules and regulations. You must not use the online services while driving or while behind the wheel or controls of a vehicle that is moving or not in “park”.
If you drop a call in your Coverage Area, redial. If it's answered within 5 minutes, call us within 90 days if you're a Postpay customer, or within 45 days if you're a Prepaid customer, and we'll give you a 1–minute airtime credit.
Protecting people's privacy is central to how we've designed our ad system.
Having a job where you're in a race to simultaneously limit customers' rights while increasing a company's rights, both to the fullest extent allowed by law, must be a strange experience indeed.
One site trying to make TOS easier to understand is Terms of Service; Didn't Read (GitHub). Who knows, maybe they'll bring some order to the chaos.
My wife and I are in the process of updating our wills, which we haven't touched in about 5 years. Life is more complicated now, especially since we have more kids, but it's all fairly boilerplate... who has finance and healthcare powers of attorney if you're incapacitated, who gets your assets and kids if the worst happens, yadda yadda. You can do one in an hour online, easy-peasy.
If you wanna go the extra mile though, put together a legacy drawer - a single place that contains all the important documents your family needs to know about, in a place that's easily accessible. Mine's an expanding file folder in the closet, with dividers for online accounts, passwords, tax returns, etc.
And as I put it together, I got to thinking... what happens to our online presence once we're gone? I'm not just talking about logins for department stores and the pizza place around the corner... what about those places we've contributed time and talent, where people might want to reach out to us with questions or to start a conversation? Except they don't realize we're already gone. 😬
DenverCoder9 ain't coming back stick man... RIP 😭 (xkcd)
Is anyone even asking the question?
It wasn't a question in Quicken Wills five years ago; it's not a question in Mama Bear Legal Forms now; it's not something I hear anyone talk about. But it's something to consider, isn't it?
Will my online presence fade into disuse, as my contributions become irrelevant?
Can a Power of Attorney / Executor of an estate update my online profiles?
Is it legal for a family member to update Facebook or pin a message on Twitter, assuming I've provided them with my passwords?
What happens with my blog posts, and contributions to forums and Q&A sites, especially popular ones that garner attention for a long time?
It happens now and we rarely notice it... but a century from now, the Internet could be littered with dead (in every sense) posts and articles, assuming the sites they're posted to don't disappear like Geocities did. So what do we do with our online presence?
How the big tech players are handling it
Some of the biggest (and oldest) names in tech have been thinking about this, but their solutions are all over the place. Some, like FB and Google, are more sophisticated than others.
Facebook lets you appoint a legacy contact, who can "memorialize" your account with a pinned message or have it permanently deleted.
Twitter will deactivate an account, but requires IDs and either a death certificate or Power of Attorney.
Google provides an inactive account manager, allowing you to specify how many months of inactivity signals that you've died, who to notify and what to give them access to, or whether you just want the data deleted.
Yahoo will also close an account, after you provide a death certificate and proof that you're the "personal representative or executor of the estate".
How the legal system is handling it
The legal system has been dealing with this too - I found references going back to the 80s dealing with email. In the last few years, most states have adopted The Revised Uniform Fiduciary Access to Digital Assets Act (RUFADAA), a law for advising executors and tech companies on how to request and provide access to someone's digital life.
Hopefully it avoids headaches like this one with Yahoo. Yahoo still claims they "cannot provide passwords or allow access to the deceased's account, including account content such as email. Pursuant to the Terms (of service), neither the Yahoo account nor any of the content therein are transferable, even when the account owner is deceased." Not sure if RUFADAA trumps a company's TOS or not.
How technology is making it easier
I really think a legacy drawer like I mentioned earlier is the way to go, although RUFADAA is a step in the right direction. Give your executor access to what they'll need, with explicit instructions on what exactly you want to have happen to your accounts. Make them your legacy contact and request they memorialize your Facebook profile. Ask them to deactivate Twitter, update your status on various forums, download your photos on Flickr, etc.
If the idea of a folder in your closet seems too old school, there are some high tech solutions popping up that look very promising.
SecureSafe
SecureSafe is a digital vault for storing all kinds of data, and provides a service called data inheritance that allows authorized individuals to access your data under special conditions. Their pro plan at $1.50/month gives you unlimited passwords and 2FA, and 1 GB is plenty of space to store what you need to share. If you had a ton of large documents or photos to pass on, you could upgrade to larger plans for $4 or $12. They have really well-rated apps for Apple and Android too.
AfterVault
AfterVault is another digital vault service, allowing you to store types of data (wills, insurance docs, funeral arrangements, etc) in buckets called vaults. They've got a sophisticated process in place, which is configurable, to determine when you might have died and connect the right people with your various vaults. For $10 a month you get 100 GB of space.. more than enough for all your important docs and photos.
1Password
I use 1Password for $5 /month to share passwords between family members, which I've been completely happy with. Plus, it lends itself to the legacy drawer concept I mentioned earlier, even though it's not marketed that way.
They allow you to upload much more than passwords, including 1 GB of documents, ID and credit cards, and much more.
You can add more users for $1 /month, so even if the people on your family plan aren't the people who will handle your estate, you could create one or more vaults with the stuff you really need to share and assign access to those who need them.
So is there an easy, one-size-fits-all answer?
Not really. No matter what solution you choose, there's no easy answer right now that fits every situation. So what should you do?
Think about which accounts you'd like someone to handle for you.
Leave a legacy drawer (paper, in the cloud, something) with a list of passwords and instructions on how to access those accounts.
Leave notes on what exactly you'd like them to do with those accounts, like deleting your LinkedIn account, memorializing you on Facebook, updating your bio on forums, etc.
And if you're looking for more, check out A Plan for Your Digital Legacy, written by an attorney at Nolo. It's chock full of useful advice on what a digital legacy is and how to make a plan for the various types of accounts.
I love everything about woodworking - the mental challenge of a design, the smell of freshly cut wood, the gorgeous finish of a dark rich stain. It's a great feeling to turn a pile of lumber into a finished product, to end up with something you can actually touch and feel and are proud to show off to friends. 😎
This summer I worked on something that, while not necessarily more complicated, had higher stakes than anything I've made before. I remodeled the dining room for my wife's birthday, including a new dining room table that could seat 8 comfortably.. and 10 in a pinch.
It was my first time working with oak, which is less forgiving (and a lot more expensive!) than pine. And so I designed more, measured more, and found a lumberyard where I could speak to some pros. I'm glad I did, because I got some great advice and a nice tour to boot.
A nearby lumberyard, with hundreds of custom-made templates for trim and molding.
When I dropped by Terry Lumber & Supply on a weekend back in June, one of the guys showed me around. What caught my eye was the custom moldings and trim. They come in countless variations, especially in older homes, so someone doing repairs or building an addition might need a custom-made template like the one I'm holding above. I was told those used to be made with a hand file, which took days or weeks to make! The green and blue plates in each cubby are other templates.
And then there's the machinery! Edgers with laser guides, saws that follow the edge of a template to cut out new trim, and planers with a half-dozen blades to smooth a board in a single pass. Even the network of exhaust fans routing sawdust from all the machines to an outbuilding was amazing.
There's no shortage of fun and expensive tools for woodworking!
I ordered about $500 worth of 5-quarter lumber. You ever have that moment right before starting a new project, where you hesitate? Like you're standing on the edge of a cliff with the water below, and you say to yourself that now might be a good time to turn back? Yeah, me neither. 😅
Needless to say, I drew up a lot of designs, a few of which are below. I knew if I could pull this off, I'd be doing it for a fraction of what it costs to buy one, but that didn't mean I wanted to waste a hundred bucks messing up good oak.
Plan carefully. Go slowly. Measure 10x, cut once and all that. 😉
There's a lot of tools that come in really handy for these kinds of projects - T-Squares, a variety of saws (miter, jig, circular, etc), levels, etc - but there are two I'd recommend above all others.
Kreg Jig If you want to connect two boards and hide the holes underneath or behind what you're making, you'll want to learn how to make pocket holes; you can see them in the images above. The easiest way to make a pocket hole is with Kreg tools.
Kreg accessories are sold everywhere, and their driver bits, drill bits, and pocket-hole screws aren't any more expensive than other brands'. They're high quality too. I used a single drill bit to make a floor-to-ceiling bookshelf, two bunkbeds, and a train table before it broke... and I think that's because I was drilling into oak and pushing harder than I should've. Live and learn.
Clamps Seriously, each one is like an extra hand. At times on this project I was using a half-dozen of them. I snag them at garage sales, store sales, all kinds of sales... whenever I can.
Yo dawg, I put some clamps on your clamps so you can clamp like a champ.
I was a little worried when I assembled the end pieces that are perpendicular to the rest of the table, since things were obviously not perfectly straight. Adding a frame underneath the table took care of that, pulling things back into shape. When I attached the legs they tended to wobble a bit, but a couple 45° braces took care of that. With good supports in place, and considering it's 1" thick oak, 10 people could dance on top of it without it breaking, let alone eating at it. 😁
Bonus.. the odds & ends and scraps make for fun building blocks.
This is the step I really love. Everything leading up to staining a piece is mostly function, but this is totally form. Stain is easy to apply, and it transforms a nice looking piece into something amazing.
Adding the finishing touches.
And finally, the finished product! The final size is about 8' x 3.75', which fits our room and family exactly. The cost of the wood, paint, and stain totaled about $600 or so - easily a quarter to a half of what you'd pay to buy one.
Voila!
All in all, it was a good challenge and I'm proud of it, although afterwards I felt I needed a break for a bit... but that's like anything you pour all your free time into for awhile. I'm sure I'll get into another project sooner rather than later. 😏
As it turns out, there were a number of interesting articles, which seem to have disappeared except for archived versions in the Wayback Machine. So I thought I'd post one of them for posterity.
Briefly, it's about how Merrill Lynch, back in the 90s, went from a gargantuan text-driven app that only a vim user could love, to something that wasn't (as they put it) a "usability nightmare". To do it, they seem to have been given wide leeway for creating and testing designs, getting feedback, and iterating quickly. The shell they were replacing was essentially the text version of this:
It's interesting to read about their first iterations, where they ended up with multiple groupings of tabs that in theory put all the useful info at the users' fingertips, and yet it wasn't very intuitive either. And what they ended up with.. well, I'll let you read it. But I've gotta say it's impressive how polished it looks for over 20 years old, and how their "bookshelf" metaphor lives on in many desktop apps even today. It says something about designing in a way that mimics something we're already intimately familiar with.
For example, in Outlook the "books" are email and calendar, the "chapters" are categories of email, and the "pages" are individual emails. Outlook keeps things listed on the right no matter which email (page) you're turned to - things you always want visible, like upcoming events and tasks. Ironically, I really like the color scheme ML added to their app, as well as the logic behind it - something Microsoft has been moving away from, to the detriment of usability. Granted, ML is hardly the only influence, but everything in the past has led to the present, and I'm sure they influenced their share of designers and programmers...
Without further ado...
Real World Design in the Corporate Environment: Designing an Interface for the Technically Challenged
Susan Hopper Merrill Lynch User Interface Design 400 College Road East Princeton, NJ 08540 (609) 282-4793 email: Susan_Hopper@ml.com
Harold Hambrose Electronic Ink President 401 S. Second Street Suite 304 Philadelphia, PA email: Harold_Hambrose@ml.com
Paul Kanevsky Merrill Lynch Strategic Market Systems 400 College Road East Princeton, NJ 08540 (609) 282-5747 email: Paul_Kanevsky@ml.com
Abstract
The development of a graphical user interface for Merrill Lynch's Trusted Global Advisor (TGA) system is a major endeavor to bring enhanced information access and updated technology to the desktops of more than 15,000 financial consultants and industry professionals firmwide.
The TGA development team's goals and challenges are two-fold. The business goal is to create a comprehensive, integrated computing environment that is unique and would identify Merrill Lynch as the technology pioneer in the financial services industry.
The technological challenge included the design of a graphical user interface that could be easily learned and understood by all users in the Firm - the majority of whom are PC illiterate. In order to gain acceptance from the users, this new system has to appeal to first-time GUI users and mouse aficionados alike.
The system being replaced is a 3270, character-based mainframe system. The current network is a usability nightmare, but contains an enormous amount of valuable information that has to be maintained and transferred to the new system. In addition, the old system forces the user to know three-letter function codes (more than 300 in all) for any task they want to accomplish.
Figure 1. PRISM - Current 3270 Market Data Retrieval System
The new platform design is utilizing the Microsoft Windows NT operating system running under a graphical user interface in a TCP/IP networking environment. The interface has to allow users to easily accomplish tasks in a minimal amount of time.
The challenge here is to make the technology as transparent to the user as possible. The team's directive: the user should only be concerned with what they need to do; how to do it on the new system should be obvious.
The team's challenges to date:
How do you design a comprehensive system that seamlessly integrates 350 separate applications into one easy to learn and use business tool?
How do you design an interface simple enough for even the most inexperienced of users, NONE of whom can afford to lose even one day of business for training?
How do you introduce user interface design and usability concepts to a huge development community and make these processes work in an area traditionally resistant to change?
I - The Shell: A History of the Design
When this project was originally started in 1993, there were many prototype iterations and opinions of what "the Shell" should look like and how users would be expected to interact with it. Multiple shell designs were produced, none of which seemed to work. Accurate evaluations of unsuccessful iterations didn't take place, as qualified Human Factors Engineers and User Interface Designers were not involved in the project. The team seemed to be designing in a vacuum - allowing only a select (and often unqualified) few to critique the design.
Eventually, the team was reorganized to include professionals who had already created a very successful shell-based design in another area of the company. This earlier product was created for a smaller-scale system, dubbed "CICERO." The user base was familiar with Windows, and CICERO was to replace numerous off-the-shelf products that were being used to accomplish the users' tasks. No formal usability tests were done; however, informal testing provided positive feedback as to the ease of use of the CICERO system. With the infusion of much-needed expertise and guidance, the TGA project was refocused and put on the right track.
Figure 2. CICERO - The latest version of CICERO that was adopted for TGA. Drag and drop targets resided at the top of the screen.
The Shell needed to accomplish several things - it needed to be extremely powerful, in that all utilities and access to all information should be easy to find. Nothing could be buried. The time-on-task for a user was critical. This alone created a design challenge. How crowded could a 1024 x 768 screen be? We didn't want to overwhelm the user, but we wanted everything within easy reach.
The Shell design was based on the CICERO tab metaphor (shown above), which Windows users found extremely easy to understand. Navigation to information was achieved through the use of up to three levels of tabs. The tabs were located clockwise around the application window. The primary level tabs were across the top of the space, the secondary tabs were positioned vertically on the right side of the space, and the tertiary tabs were located underneath the application window. Access to information was achieved by clicking on a top level tab, then on active secondary choices and tertiary tabs if necessary. By utilizing the tab metaphor from CICERO in the TGA shell design - segmenting the applications in a hierarchical format with which the users were familiar - we hoped to lessen the learning curve.
In addition to the application space, the TGA shell also needed to display real-time data at all times, such as stock quotes and other market information. These would be viewed in the InfoCenter, a panel located on the left-hand side of the application space. Users also indicated that they wanted to view company video broadcasts and TV news channels in the InfoCenter. A common set of functions across all applications such as print, fax, e-mail, on-line help and tutorials are also provided by the Shell via a set of buttons in an area known as the Action Bar.
II - Design Iterations
Once the CICERO model was adopted, we needed to refine it for the new user base. We created several versions; each iteration was based on a new requirement from management or users.
Initially the outgoing utilities such as print and fax were located at the top right, to keep them in view; however, we discovered through our own use that it was awkward to drag upwards, so they were moved to the lower right. Since our users need to be aware of many things at once, not all of equal importance, we developed a mechanism to alert users to incoming messages or alerts that they need to view or take action upon, such as an opinion change or an incoming e-mail message. This message center consisted of buttons which appeared to have a light indicating the urgency of the alert: red meant critical, yellow meant a non-critical message or alert was present, and green meant no alert or message was present. We soon discovered that these colored buttons became distracting, and they were removed. This facility was replaced by a scrolling window which the user could configure to view only the alerts they were most concerned with.
As time passed, Shell real estate became quite valuable. The area known as the Device Bar was expanded to include Calendars, clocks, intelligent messengers and Go To and Return To controls.
Also our tastes had changed. We now preferred a flatter, sleeker look for the Shell. We maintained three-dimensionality, but lowered the height of buttons and frames - giving us a little more visual breathing room.
Behind the Screens
To achieve a truly integrated computing environment, all applications, whether developed internally or purchased (e.g., Microsoft Office), are subordinated to and controlled by the shell. In effect, they become pages in the TGA book, and cannot be accessed as "stand-alone" applications. The uncluttered layout and shell-managed display avoid the confusion caused by floating or overlapping applications, while experienced Windows users are not hindered in the performance of their daily tasks.
Several methods were used to achieve this goal. All applications reside within the application space in the shell. The shell manages the process of showing and hiding application screens, and does not allow any portion of an application to become obscured. Intuitive bookmarking and navigational methods are implemented to give the user the feel of continuity and integration within the shell.
To reinforce this feeling of continuity, the shell provides a context management and sharing mechanism. Using context management, the user can pick the focus of interest once for various applications. For example, a user can select John Smith as the client context, and then navigate to the client profile application. With the same client in focus, the user can then switch to the asset allocation application to see John Smith's holdings. In the background, the shell delivers all the necessary information to each application as soon as a new client is selected. The context sharing can be as simple as passing an account number, or as complex as passing multiple database records from application to application.
The bookmarking capability in the shell simplifies the interrupt-driven day of a Financial Consultant. Using the same example, while the user is viewing John Smith's information his telephone rings. It's Mike Doe and he wants to buy a stock. The user clicks on the order book and places the order. To return to the unfinished task, the user simply clicks on the Client book, and John Smith's profile is presented at exactly the point he left it.
III - A Cry for Standards (or Putting the Cart Before the Horse) : The Design Team is Formed
With the Shell design firmly entrenched, we set out on our adventure to bring the Shell to the development community. True to corporate style, a committee was formed to quell the serious demand for some sort of user interface standards. We had no clear-cut objectives other than to go into seclusion and miraculously come out with UI standards for the Shell and as-yet-undeveloped applications.
The Design Team members represented the areas of technology, business, design and human factors. Each discipline was represented by at least two individuals.
After several months of brainstorming and brain-bashing, we came out with a set of half-baked, fairly useless standards. It didn't go beyond the UI guidelines already available from software manufacturers and industry experts, with a few exceptions. We created groups of controls such as the Finder, which handled context management. Although the standards reiterated what most GUI developers already knew, we recognized that a large portion of our development community had never been exposed to GUI development before (they were, until recently, COBOL mainframe programmers) and that we needed to educate them on the various controls. Although they could have read the guides, we wanted to create one comprehensive document that they could refer to for anything.
This attempt failed. The standards spoke of what a control was, but offered little assistance in how a control was to be used. All applications posed unique design challenges, and the Standards document proved to be a poor cookbook for application design. We were immediately sent back to try again by senior management. Recognizing that the composition of this team was insufficient, we reorganized the original team to include new members with UI design skills and human factors experience. Still, we ran into the concern that we were putting the cart before the horse. But the demand grew and we complied. After several more weeks, a Standards document was published. It included more corporate standards to promote Merrill's look and feel, such as color usage standards, fonts, spreadsheet controls, group box controls, "more detail" controls and language. It was a starting point that would assist developers in fulfilling their directive to create applications that had the same look and feel.
Even though the Design Team was not completely comfortable with the standards we published, the exercise was not useless. We found that we needed to amend the Standards Guide or publish updates on a relatively frequent basis. This not only aggravated the developers, but potentially pushed back their delivery dates.
When interface design began in earnest, the Standards could be amended. Now came the hard part - getting the standards to fit into the interface design of the applications - or vice versa, depending on the developer or their manager.
IV - Into the Trenches
Upon general distribution of the first draft of the Design Standards and Guidelines, the Design Team re-evaluated its position in the development environment. Up to this point, the team had been a decision-making body responsible for the creation of a list of recommendations for the design of the TGA interface. The team now had to mobilize in order to effect change in an environment that had already progressed in the development effort without the benefit of interface design standards and guidelines.
In order to efficiently effect the most change within the many application groups developing end-user interfaces, the Design Team decided that it would be advantageous to divide and conquer. Sub-teams were formed from the members of the design team and assigned to various application development projects. These sub-teams consisted of four members - one member from each of the disciplines that comprised the original committee. There was a sub-team for each of the major development efforts, such as Client applications, Product applications, Research, Business Management, etc. The design team as a whole would remain intact, but servicing the development groups as smaller, more dynamic teams seemed to work best.
The sub-teams met on an as-needed basis, more in a consultative fashion at first. Usually the UI designers would venture out first to assist developers with standards questions and then UI design. Eventually an informal review of the application was scheduled with the whole sub-team. A document was to be produced to assist the developers in making any necessary changes; however, this seemed to imply a more formal approach and the document was abandoned. Verbal communication seemed to be enough. The developers also had input into the design of controls or variations in layout to accommodate any special needs they may have had.
Cross Application Reviews were developed to ensure that any function common to multiple applications was being implemented consistently. These reviews were done by the entire Design Team, in addition to any formal reviews scheduled with the sub-teams.
The Up Side
With smaller, more accessible teams, developers were now encouraged to utilize these individuals as a resource for the definition of their user interface - not simply as a governing body bestowing a blessing (or not) on their UI design efforts. Individuals with a particular skill were now seen by developers as an aid - not a review checkpoint. Members of these smaller teams could now communicate rapidly to the whole design committee those decisions made while working with developers. Communication between the individual developers and the design team (as well as among the design team itself) was improved.
Because of the small size of these sub-teams, members were able to sit in the developer's cubicle (their turf) to evaluate their user interfaces. This created a hands-on work session environment rather than the presentation-and-review scenario - the alternative with a large design committee. Developers no longer felt as if they were "going before the board", and interaction between them and the Design Team flourished. And since UI designers were able to assist any team (not just the sub-team to which they belonged), the sub-teams' evaluations usually went more smoothly.
The Down Side
As a large committee, it was easy for individual members to leave certain areas of the guidelines and standards decision-making to others. Within smaller teams, however, it was critical for all individuals to be well versed in all design decisions documented in the Standards and Guidelines document and why they were made. It was important, too, for sub-team members to support the design decisions recorded in the document, even if they had disagreed when the decision was originally made. When members of the team responsible for a document contradict the statements recorded in it, the team seems fragmented and disorganized. It is important for team members to voice disagreements and concerns about the documented guidelines and standards to the appropriate audience, but to present a unified front.
Once out in the trenches in sub-teams, it was not difficult to forget the larger design committee to which we all belonged. It was critical to the survival of the Standards and Guidelines document (as well as the team) that design decisions and challenges arising from sub-team work be communicated to the other design team members. Without ties to the original team, we risked becoming as fragmented a development and design effort as the one that necessitated our creation.
It also became apparent that there weren't enough UI designers and Human Factors specialists to go around, so some applications fell through the cracks. They were discovered during formal Design Team Reviews and Cross Application Reviews. We are currently seeking to either streamline the process or acquire more resources.
V - Battle of Wills: Process, Standards and Applications Design
Since all of the applications were to run under the shell, we needed the appearance of one big, consistent application. This presented another design challenge - introducing developers to the concept of user interface designers and usability engineers. Our entire development team for this project is more than 400 people. Although developers were familiar with a process, these new extra steps caused some anxiety in the community. Standards were welcomed; however, when it came time to follow a process (which a project of this magnitude truly needs), there were some differences of opinion as to which process to follow and what to do when. As a result, no consistent process was followed for the most part.
As in most corporate environments, time is of the essence. Since no formal UI Design step was in place in the development lifecycle, developers forged ahead in order to meet their deadlines. And there were many of them - Beta 1, 2, and 3, Pilot, Release. Applications were developed rapidly, with little regard for usability. Usability Testing, however, was a formal step in the process. Most developers would go in on their scheduled usability lab day and promptly watch their application fail. A usability report was delivered and most changes would be made. Then the developers would parade in to usability and watch their application fail again. UI Design clearly needed to be a formal step in the process, and the team campaigned for this. To date, it is now part of the process; however, the process is still being fine-tuned.
VI - A Pain in the Neck: Usability Testing Round One - An Example
Now, we had our shell and some redesigned applications. We were ready to conduct our first formal usability testing series. We brought in Financial Consultants from all over the country with different backgrounds and levels of PC experience. Our users were divided into three major groups: Administrative Assistants, Field Champions, and FCSAC (Financial Consultants Systems Advisory Council).
Every user that was tested tilted their head to the right to read the side (secondary) tabs. We thought it might present a problem the first or second time through, but the problem persisted, even once the users were familiar with the tab structure. We assumed the problem stemmed from the readability problems associated with rotating text vertically, since some letters would blend into each other and other letters would kern apart. We tried changing the font, the size, the color; we even anti-aliased the text - still the users tilted their heads and proclaimed they couldn't read the tabs. In addition, if the users found the correct side tab (after tilting their head), they missed the tertiary tabs because they were positioned at the bottom of the application area.
Another problem was that users didn't seem to understand the tab metaphor. The tab, tab, tab metaphor seemed alien to these users; they didn't understand the hierarchical structure. We also learned that part of the problem was one of language - which would have to be addressed separately. What we thought a tab should be called was foreign to the end user. So the business partners, who act as liaisons to the end users, went out to talk to the Financial Consultants and their Administrative Assistants. Paper prototyping sessions were conducted to determine what the tabs should be named and what tasks would reside under them. Finally, users were influencing their product's form.
A secondary, but nagging, complaint from users was the lack of color. After using PRISM for so long, they had become accustomed to looking at a bright screen with large chunks of primary colors (blue, red, yellow and gray). TGA looked "depressing" and "too gray" to them. We had made a conscious effort to keep color to a minimum, simply so as not to distract users, and to use color only if it was truly meaningful, not just for aesthetic reasons. An understated and unobtrusive (not to mention not-blinding) interface was our goal. But it seemed our effort went unappreciated. They screamed for bright yellows and reds, greens and blues. Clearly we needed to make a compromise. So we went back to work.
VII - Back to the Drawing Board...Again
After careful consideration of all usability issues, it was decided that we needed to take the design apart and strengthen the metaphor.
The interface was redesigned and the metaphor of a bookshelf was introduced. The former four-level tab hierarchy was now organized and presented on the screen as Books, Tabs, Chapters and Pages. The main benefit of this metaphor is that it hides the complexities of managing multiple applications from the user, while presenting a uniformly logical view of the data. For a novice user, opening a book is as simple as a single mouse click on the cover of the book. Flipping pages is just as simple, with a single click on the desired tab. The hierarchical organization, its dependencies and relationships are clear to the user through this presentation on the screen. For an advanced user, the shell provides shortcut keys to reduce the number of keystrokes and mouse clicks. To aid a novice user, the shell makes all of the transitions smoother and more apparent through extensive use of animation and sound effects.
The InfoCenter was moved to the right-hand side. Though this information was important, users felt it to be secondary to their task at hand. This put focus on the application area, where most of the work is performed.
Figure 3. Current TGA - Latest version of TGA. 1. Books, located on the left, are the primary means of navigation and the highest level of the hierarchy. 2. Tabs and chapters. 3. Application space. 4. InfoCenter. 5. Pop-up utilities, such as Help, Snap Quote and Calculators. 6. Action Bar. 7. Go To and Return To. 8. Predefined application buttons.
The top-level tabs were replaced by the books, which were placed vertically down the left side of the application area. They were color-coded, each book a different color. The secondary tabs, colored to coordinate with the active book, were moved to the top, and the tertiary tabs were replaced by a tab-like control, called chapters, which also indicated, via an embossed down arrow, whether there was a fourth-level selection. By placing all of these navigational controls closer together, we greatly reduced mouse and eye movement. The book, tab, chapter, page metaphor was something all the users could relate to. They could touch it and understand the underlying hierarchy - thereby reducing the learning curve.
We increased the use of color in this design. Because the color of each book corresponded to its associated tabs, users could identify at a glance which book they were working in. Additionally, we added color to the application space by using colored bars for group boxes and colored triangles for detail indicators, lessening the feeling of "grayness."
Usability Testing Round 2
We started with paper prototypes, taking them to different groups within the Firm. We showed users the old design and compared it with various versions of the new shell. This time we got a more positive reaction. They commented that the books looked more user-friendly and were easier on the eye - "no more tilting your head." They liked the color coding of the books, and they felt the interface itself was more pleasant to look at. So we proceeded with an electronic prototype. After a few weeks of tweaking the design, it was ready for another round of testing.
Again, we got positive results. Users no longer needed to tilt their heads, the tabs were much easier to read, and they understood the hierarchy. Tertiary selections were easily located, and fourth-level functionality seemed to present no problems.
There were still a few issues to resolve, however. Readability was still clearly an issue. Users, having been used to reading 12 point Courier in all caps, found our 9 point Arial hard to read.
Where We Are Now
At the time of this writing, we are about halfway through our development effort. The shell and its applications are in the final revision stages, in preparation for the first round of Beta testing, which is looming on our calendars. Beta versions are going out to three offices (1 per month for 3 months). Then, taking this feedback with us, we will pilot TGA in select offices. Eventually a full rollout will commence in late 1996.
Conclusions
The ever-evolving design process on this project proved to be the biggest learning experience for most members of the team. When TGA was first conceived, this appeared to be a monumental project - one that would never see the light of a computer monitor. Seeing it take shape was one of the most rewarding aspects of being on this project.
A few lessons learned, now ingrained in all future processes: Prototypes come in all shapes and sizes. When you're still trying out concepts, use prototypes that don't require code. Use paper or multimedia tools to demonstrate your concepts.
We found that the shell was the furthest along in its development cycle (even though it had the fewest business requirements) because we continually prototyped with either still pictures (as in Photoshop) or scripting tools (Director). We could tweak and test until it was right, then we could build it. It doesn't have to "work" to get your ideas across. This way you can quickly mock up a conceptual model without worrying about coding, debugging and data.
Putting together a multidisciplinary Design Team was also an important factor. Once your objectives are clear, each member can contribute something valuable from their own perspective. It provides a more multidimensional design.
Recognize the iterative nature of the UI development process. Design solutions are seldom found at the start of any development project.
Another important lesson, especially in the corporate environment, is compromise. Deadlines are mightier than the standard, or usability, or UI design. Accept incremental changes. Developers want to build the best system they can, and if they accept you as part of the team, there's no better partnership. When in a team that is responsible for making decisions that affect many development areas, always defer to the person with the expertise (not the highest-ranking member) when no one agrees. This is the whole point of a multidisciplinary team.
Mobilizing a larger entity such as the Design Team into the trenches can win many allies, especially if there is initial resistance to the design process. Working sessions are far more productive than being treated as a sign-off checkpoint.
A lesson learned from the development community: never turn away a user interface designer who wants to help. Working in a vacuum is not user-centered design, and it can mean a lot of time wasted going in the wrong direction. User interface designers and human factors specialists should get into the process as early as possible. There's nothing less productive than telling developers that "their baby is ugly" two thirds of the way into the development effort.
Acknowledgements
The work presented in this design briefing represents the collaborative work of many talented, committed and hard-working individuals, inspiring visionaries and brave leaders. These people include Ritch Gaiti, Andy Williams, Tony Pizi, Paul Kanevsky, the members of the Design Team: Alan Amira, Laura Flannery, Phil Gilligan, Betty Greenberg, Paul Ilechko, Pat McAleavy, Janine Purcell, Nicole Speigel, Maury Weinberg, and Christine Zafiris. Members of the development team: Ashe Vashtare, Chris Cobb, Doug Breuninger, Rob Sterlacci, Pam Smith, and numerous others who made this project possible.
Once upon a time, in the glory days of Geocities, Angelfire, Tripod et al, we were in a race to the bottom, proudly outdoing each other with animated gifs, scrolling marquees, under construction signs, and midi files. Some of us never left.
It was okay at the time - 20 years ago, the Internet was stretching its legs and testing the edges. A few million people, with incredibly ugly websites, sharing whatever was on their minds. But it got annoying pretty quickly.
While most of the problems disappeared over time, new ones arose to replace them. Today there are well over a billion sites, and the name of the game now is attention... do anything you can to grab it and keep it.
But those aren't the only problems... here's my top 7.
Autoplay Videos
Imagine doing a search, opening a website that looks interesting, and they've put together a really great video. You want to hear that video, loud and proud, right away. But then you have to go through the extra work of clicking "play". 🙄
Do your visitors a favor. Don't make your videos autoplay. They're sitting at work, in the library, a quiet coffee shop, and your site blasting a video through their speakers unexpectedly isn't just annoying - it's freaking embarrassing, and they'll be gone instantaneously and possibly for good. It's an attention-grabber, but it's not the attention you want.
Worse yet, videos can take a while to load, so your visitors also get a near heart attack as they rapidly play whack-a-tab trying to find which one is playing audio. They intentionally came to your site - let them intentionally start the media too, on their terms.
Popup... Popups Everywhere
If you've taken the time to put together a newsletter, write a whitepaper, or create a survey, a signup box in the sidebar is a good place to start. It makes sense - they intentionally came to your site seeking something, so offer them more of that same something.
But it usually snowballs from there. A small slide-out box in the lower-right corner isn't horrible. Then there are the larger slide-outs from one side of the page or the other, or even huge banners on the front page. A request to enable push notifications slides out from the top.
Oh, and don't forget the full screen popup as you try to leave the page or you're midway down the page trying to read. Nothing is more attractive than a desperate "Hey where are you going? Huh HUH? Guys? Guys?!? Come baaaaaaack! 😭"
Abusing Notification Icons
We're primed to recognize red as an alert or notification.
Some sites use indicators to notify registered users of unread messages, and then (ab)use them to remind unregistered visitors that they don't yet have an account. I've fallen for it, unconsciously clicked on an icon, and I can tell you now the number of times I've registered on a site for that reason is between 1 and... -1.
Infinite Scrolling / Footer Combo
Infinite scrolling is a clever way to engage visitors, but it's awful when the footer is where the links for help, the contact page, the site map, etc. live. I love getting the briefest of glimpses of what I need at the bottom of the site, but before I can click on it, new posts load and everything is pushed out of reach again.
Almost... there...
What's a Mobile Device? (or pinch & zoom ftw)
More people than ever are browsing with mobile devices, yet some sites still don't optimize for them, for example by using a "responsive" design. I don't know the intricacies of mobile design, but most themes for WordPress, Ghost, et al will handle it for you. If you decide to go your own way and build a custom theme, it shouldn't force mobile users to pinch & zoom their way across your site. 🤮
Disabling Comments After xx Weeks
This is a weird one, but I've run across it quite a few times. Maybe it's the default for some blogging engines and most bloggers forget to change it?
The Internet was intended as a place of collaboration and interaction, yet some sites disable comments after a few weeks (before the page is even discovered by anyone) or worse yet include no comments at all. If you're present elsewhere on the web, like Facebook, Twitter, or Instagram, don't bet on your visitors engaging you there. I don't even have an account on any of those.
Links That Don't Look Like Links
I hadn't really noticed this one until I started writing this post, but here's one we seem to have unlearned over time, which is a little weird I think. Years ago it was common for links to be blue, visited links to turn purple, and active links (ones you've clicked on or are hovering over) to turn red.
I realized the theme on my own site shows links in black, so I changed it to use navy and purple (and red on hover).
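The fix amounted to a few lines of CSS along these lines - a rough sketch, since the exact selectors depend on your theme:

a:link    { color: navy; }      /* unvisited links */
a:visited { color: purple; }    /* links you've already been to */
a:hover,
a:active  { color: red; }       /* hovering over or clicking a link */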
Poking around the web a bit, there seem to be a lot of sites that don't change color on visited, or use bold instead of underlining, which just gets confusing when unlinked text is also bold-faced or underlined. We should get back to making sure hyperlinks are visually (and consistently) separate from the rest of the content.
I'm working on a series of posts about GraphQL in order to get more familiar with what it is and what its capabilities are. I hope you find these useful too!
Web APIs are often implemented using REST, which has been the standard way of making web resources accessible for over a decade. In fact, when you access a webpage, your browser performs a GET for the page and each resource it needs (images, style sheets, etc).
Requesting a page and an image in the page, via Postman and a web browser
There's a different way to access web resources, called GraphQL. The first time I heard about it was a few years ago in some article about Facebook, but I didn't pay much attention to it at the time. I didn't realize until recently that Facebook actually developed it - and a few years ago, open sourced it too.
What is GraphQL?
As much as we'd like to think there's one "best" way to do things, GraphQL is an alternative to REST, not a replacement. And while there are quite a few differences between them, the major difference is that GraphQL lets you build a query to get exactly (and only) the data you're interested in.
An API that implements the REST interface allows you to (for example) very easily GET some data about an entity - and by default, you get the whole shebang. If you can limit what you get, or customize the returned dataset somewhat, it's only because the developers wrote code to explicitly support those limits and customizations. How that looks will differ with every API... if it exists at all.
Take the Ghost API as an example - it's built into the blog engine I use for this site. You can request data on individual posts, authors, etc, which is all standard fare. On top of that though, the devs provided a few query parameters to affect the returned data (there's a sample request after the list below):
include returns more data, like full author details for a post
fields returns less data, by specifying which fields should be returned
formats returns more data, by returning data in multiple formats
filter returns less data, by filtering by certain attributes
limit and page return less data by implementing paging
order doesn't even filter data, but it affects paging results so it's tacked on
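To make that concrete, those parameters just get strung together in the query string of a request. Here's a rough sketch of what a call might look like - the exact endpoint and key handling depend on your Ghost version, so treat this as illustrative rather than copied from their docs:

GET /ghost/api/v2/content/posts/?key=YOUR_API_KEY&fields=title,url,published_at&include=authors&limit=5&order=published_at%20desc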
Each of those items had to be explicitly coded, and while the Ghost API offers more customization than a lot of other APIs I've seen, they can only offer so much. If I want to "filter" by an attribute they don't support, I have to request more data than I need and filter it out locally. If I want to "include" some other entity they didn't plan for, I have to make multiple requests and stitch things together client side.
The flexibility in GraphQL is that it allows (forces, really) a client to create their own query to get just the data they want, in just the way they want it. It also provides tools to enable the server to provide that data and only that data. In other words, GraphQL out-of-the-box returns the smallest amount of data needed, whereas REST returns the largest.
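As a rough sketch of what that looks like - against a made-up blog schema, not any real API - a GraphQL query spells out exactly which fields you want, and the response mirrors that shape:

query {
  posts(limit: 5) {
    title
    url
    author {
      name
    }
  }
}

Nothing extra comes back: no unused fields, and no second request just to pull in the author's name.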
I wrote this post to force myself to do a little digging into GraphQL. It might not be too helpful to anyone else yet, but I plan on writing a short series of posts that elaborate on everything here and present some examples. There seems to be a pretty rich toolset for GraphQL, including:
Some sort of IDE in the browser to play around with GraphQL
Server libraries in C#, Python, and more (what's a server library?)
GraphQL clients for C# and Python (what do they mean by client??)
I'll leave you with a couple introductory videos. The first one, by Scott Tolinski, is a nice quick overview that's less than 15 minutes.
The second is an hour long tutorial by Eve Porcello. She created a great introduction using GitHub's API, but it requires a Lynda.com account. I'd suggest checking with your library to see if they provide free access - mine did.
I'm working on a series of posts about GraphQL in order to get more familiar with what it is and what its capabilities are. I hope you find these useful too!
Next ➡ Part 3: Using GraphQL for .NET to access a GraphQL API
In the last post, I just wanted to understand what GraphQL is, and the justification for using it. What I learned is that it's about flexibility and efficiency; getting exactly what you need, in the format you need it. Now I want to look at an actual implementation.
Unfortunately, Facebook spent years developing an amazing tool (that's right, FB designed GraphQL), but it has screwed up so badly and so often that anything it does has a stain on it. Fortunately, GraphQL is open-sourced and there's another tech giant that implemented it too, so we'll see how they did it.
When GitHub began moving to a new version of their API several years ago, they migrated from REST to GraphQL. Their reasoning is very similar to what I've read elsewhere and experienced myself.
Our responses were bloated and filled with all sorts of *_url hints in the JSON responses to help people continue to navigate through the API to get what they needed. Despite all the information we provided, we heard from integrators that our REST API also wasn’t very flexible. It sometimes required two or three separate calls to assemble a complete view of a resource. It seemed like our responses simultaneously sent too much data and didn’t include data that consumers needed.
Queries
One of the tools available with GraphQL is GraphiQL, which allows users of your API to design queries right in the browser and see immediate results. This is a tremendous time-saver!
With REST, I've always used Postman to manage my queries without having to have a full-blown app in place from the get-go, but it still involves trial and error. You have to read the API's docs, figure out what you want to call and how, get the result and inspect it, make adjustments, make more calls to other endpoints, blahdee blah blah blah. Occasionally, an API provider produces their own "API Explorer" of sorts, which lets you try calls right in the browser, but that can't be easy to develop and maintain...
GraphiQL is a ready-to-go "API Explorer". It integrates API docs right into the experience with "typeaheads" (similar to intellisense in Visual Studio), which helps you figure out what to query and then shows you the results. Let's try out GitHub's API explorer.
Allow it to access your GitHub account... even though it is GitHub. 🤨
Click the "Execute Query" triangle in the upper-left to run the default query... info about you!
Click the "Docs" button on the right side to view the API documentation. Note the two root types - query and mutation. A query is similar to a REST GET, while mutation is similar to POST or DELETE. Stick with query for now.
As you drill down, you can see objects to query, parameters to restrict your queries, and other child objects. It's like you're getting to browse their database!
Here are a few queries I tried running (a sketch of the simplest one follows the list):
The "Hello World!" of GraphQL queries
A query for my own repos' URLs, and the homepages of repos I've forked
My bio, my followers, my followers' followers' bios... why? Because I can. 😑
Mutations
Once you've run a few queries, try out mutations. You've already granted the tool access to everything in your account, so you can update (mutate) pretty much anything in it. Here's a short screen capture of me doing two things:
A query to get the ID associated with an open issue in one of my repos
A mutation to add a few reactions to the issue
Adding reactions to an open issue, via mutations
As with running the queries, having the documentation on the right side is great. I was able to drill down and see that addReaction requires an AddReactionInput type, which consists of three things - and only two are required (notice the ! ).
addReaction requires the ID of the reaction to modify, and the reaction type to add
The only thing that seemed unintuitive to me was the requirement to include a body in the mutation, as if it's required to return something - even though with a REST POST you often wouldn't care about anything beyond a 200 OK return code.
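For reference, the mutation I ran looked roughly like this (the subject ID below is a placeholder for the issue ID returned by the first query):

mutation {
  addReaction(input: {subjectId: "ISSUE_ID_FROM_QUERY", content: HOORAY}) {
    reaction {
      content
    }
    subject {
      id
    }
  }
}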
Omitting the body, or having an empty set of braces, makes GraphiQL sad 😢
Next step?
I think the next thing I'd like to try is using a tool like GraphQL for .NET to access the GitHub API from an application, instead of GraphiQL. 😎
Several years ago, a user on SO tried to add a reference to System.Core to his C# project manually and got an oddball error: "A reference to 'System.Core' could not be added. This component is already automatically referenced by the build system."
As it turns out, the "problem" was actually an intentional design choice. I was able to dig up the reasoning behind the error in a Microsoft Connect post, from a Microsoft employee. Then Microsoft killed off Connect and everything was lost. Well, some of it was archived by the Wayback Machine... but it's not easy to find anything on there, so here's the original thread (with a few formatting changes to aid readability).
It's a shame Microsoft didn't just archive the site, as historical information like this is worth keeping around...
Cannot remove System.Core.dll reference from a VS2010 project by Niranjan U
Closed as Won't Fix
Type: Bug
ID: 525663
Opened: 1/19/2010 12:45:29 AM
Access Restriction: Public
When I create a new VS2010 project (either a windows forms application or a class library), the project will add a reference to System.Core.dll. If I delete that reference from that project, the project still compiles fine even though there is 'Using System.Linq;' in my code. If I try to re-add the System.Core reference to my project it will throw an exception with this message: "A reference to 'System.Core' could not be added. This component is automatically referenced by the project system and cannot be referenced directly".
Product Language: English
Version: .NET Framework 4 Beta 2
Operating System: Windows Vista
Operating System Language: English
Steps to Reproduce
Create a new VS2010 project targeted for framework 4.0
Remove System.Core.dll from the references.
The project still builds properly even though there is 'Using System.Linq;' in the code.
Try to re-add the System.Core.dll assembly to the project reference and it will throw an exception with this message: "A reference to 'System.Core' could not be added. This component is automatically referenced by the project system and cannot be referenced directly".
Actual Results
An exception with this message: "A reference to 'System.Core' could not be added. This component is automatically referenced by the project system and cannot be referenced directly".
Expected Results
System.Core should not be added by default into the project system. If it is being added, then it should not allow the user to delete the reference to System.Core. So basically, it should not keep the user in the blind about System.Core being used internally.
Posted by Microsoft on 1/20/2010 at 3:44 AM
Thank you for your feedback, we are currently reviewing the issue you have submitted. If this issue is urgent, please contact support directly(http://support.microsoft.com)
Posted by Microsoft on 1/21/2010 at 1:29 AM
Thank you for reporting the issue. We were able to reproduce the issue you are seeing. We are routing this issue to the appropriate group within the Visual Studio Product Team for triage and resolution. These specialized experts will follow-up with your issue.
Posted by Microsoft on 1/25/2010 at 4:05 PM
Thanks for sending us this issue.
I agree that it would be great if we could create some UI that prevented you from being able to delete the reference to System.Core from your project.
There is a work-around to this, which involves cracking open the project file and adding the reference back manually to the project file. You can find out how the reference should look by copying it from a project file that you create from scratch and inspect.
To edit your project file (or inspect the temp project you are pulling the reference from) you simply right click on the project file and choose "Unload Project" from the context menu. Then right click on the project again (which is now grayed out and unloaded) and choose "Edit {Project Name}" from the context menu. This will bring up the XML editor with your project file. When you are done looking at the project file and making changes, then you can right click on the project again and select "Reload Project" from the context menu. It will prompt you whether or not you want to save changes and/or close the XML file.
Unfortunately, we are at a point in the cycle where we cannot change the UI. As such, I am resolving this bug as Postponed so that we can look at this again in a future release.
Thanks,
Chuck England Visual Studio Platform Program Manager - MSBuild
Posted by Niranjan U on 1/28/2010 at 11:29 PM
What about the behavior when I try to create an application targeted for .NET 3.5 using VS2010? I see that the same behavior is seen even then as well and I think this is incorrect. I think it should give the user the same experience as that of developing in VS2008. So it should not be using System.Core by default.
Posted by Microsoft on 2/19/2010 at 4:18 PM
The references that are used by projects are determined at build time. Certain references are automatically added and cannot be removed. If you remove them, we still automatically generate the appropriate reference. This is a requirement for Visual Studio 2010, and a change from Visual Studio 2008.
So, in this case, we would want to prevent you from removing the reference in the first place.
Since we just released the RC (Release Candidate), we are too late to make any changes of this size at this point, as it would require a large amount of testing, and could introduce regressions in the code. As such, we have postponed the bug so that we can take a look at this for a future release.
Chuck England Visual Studio Platform Program Manager - MSBuild
Posted by androidi on 2/20/2010 at 6:10 PM
I'd like to add that during beta 2 I changed a .NET 4 project to 3.0, then 3.5. I had to manually delete and re-add some references since it didn't automatically work properly, at least back then.
Now in the RC I came back to this project and changed it back to .NET 4.0... And I find I cannot compile after adding use of the dynamic keyword, because System.Core is missing, and I cannot add System.Core back because:
A reference to 'System.Core' could not be added. This component is already automatically referenced by the build system.
Posted by Maximilian Haru Raditya on 4/26/2010 at 1:44 AM
I'm not actually bothered by the inability to remove the System.Core.dll reference. But I am bothered by the inability to add System.Core.dll back if I accidentally removed it, since it throws an error: "A reference to 'System.Core' could not be added. This component is automatically referenced by the project system and cannot be referenced directly".
I can add it back, but I had to manually edit the project file in a text editor/VS editor. This is not a convenient way.
I think VS should just bring back the old VS2008 way. I'm not sure why the design was changed. Any words?
Posted by nZeus on 11/9/2010 at 12:48 AM
Hello!
Without the line <Reference Include="System.Core" />, msbuild gives me an error: error CS0234: The type or namespace name 'Linq' does not exist in the namespace 'System' (are you missing an assembly reference?)
After I add the line <Reference Include="System.Core" />, I get another error: error CS1061: 'System.Collections.Generic.List<string>' does not contain a definition for 'Select' and no extension method 'Select' accepting a first argument of type 'System.Collections.Generic.List<string>' could be found (are you missing a using directive or an assembly reference?)
I can't use LINQ at all! My project can't be built! How can I resolve my problem?
Posted by Chuck England - MSFT on 10/4/2011 at 9:03 AM
@androidi: It is unfortunate that when you attempt to remove the reference for System.Core that we don't do the right things. However, as the message you saw indicates, System.Core is implicitly referenced. So, the fact that you have removed it, other than physically removing a line from the project file, has not changed the build in any way. There are legitimate scenarios where you might want to be able to do this, but it is a very edgy corner case.
Adding the reference back is super simple if you really want it to be there. Right-click on the project and select Unload. Right-click on the project node again and select Edit. In the editor, copy another reference line and paste it below the original reference inside the same ItemGroup. Change the reference name to "System.Core". Right-click on the project node and select Reload. Choose "yes" for the question to save and reload.
VS2008 did not handle multi-targeting correctly. It would allow you to build things that were not legitimate. With VS2010, we have tried very hard to make sure that if it builds for the target framework, then it will run on the target framework. I can't say I know of any places where this is not true.
In this case, we build even though you removed a reference. You might say, but it should not have built. But, it will run on the targeted framework, and hence should have built. The fact that we have added a "required" reference for you is somewhat of a broken experience in how it was developed.
Chuck England VS Pro PM
Posted by Chuck England - MSFT on 10/4/2011 at 9:12 AM
@Maximilian Haru Raditya: Yes, the experience is not great. Since System.Core is required, you should never remove it. We fixed this by adding it for you even if you remove the reference. However, we should blindly ignore the fact that when you add it back an error is generated. A lot of this was imposed on us by previous versions which did not understand multi-targeting, and simply did not get cleaned up.
You should not be removing System.Core, as a general rule. If you do, and it really bothers you that it does not show up in the "References" node in Solution Explorer, then you can manually add it back by right-clicking on the project node and selecting Unload. Right-click on the project node again and select Edit. In the editor, copy another reference line (for example, the one for "System") and paste it below the original reference inside the same ItemGroup. Change the reference name to "System.Core". Right-click on the project node and select Reload. Choose "yes" for question to save and reload.
We can't go back to the VS2008 way, as it does not understand multi-targeting. Nor did it understand true "profiles", like the Client Profile of the .NET 4.0 framework. In fact, the issue you are seeing is because the VS2008 system was upgraded to handle multi-targeting, and it is the new set of rules that are rejecting the reference.
We really just did not catch this early enough, and with a really solid understanding of how we could fix it to get a fix prior to our release. But, the fact that you should always reference, and hence never remove "System.Core", made this a Won't Fix, since this is not something that 99% of customers would ever do.
Chuck England VS Pro PM
Posted by Chuck England - MSFT on 10/4/2011 at 9:21 AM
@nZeus: The issue you are having is completely unrelated. The message you are receiving is correct, you are missing another assembly reference. You would need to include the assembly which contains the extension methods for the generic list container that you are using.
I don't have your code (and please do not post here, see below), so I can only guess as to what is going wrong. But, the first error cannot be related to System.Core, because even though you remove the reference from the project file and it does not show in the Solution Explorer, we added it automatically for you.
It is more likely that you have not added the appropriate assemblies, or you are not targeting an appropriate framework. In VS2008, it was possible to target 2.0, but still use Linq. In VS2010, this would not be allowed, because true multi-targeting will keep you from building an invalid assembly for a specific target framework.
Please note that I only caught this issue by chance. This issue is in an old database that we no longer use. Once closed, we no longer see the issues.
When reporting an issue, please file a new Connect Bug. If we find that this issue has already been filed, we will attach the issue to the other issue as a duplicate. However, we tend to get a lot of general comments regarding completely different issues.
Hence, general help and questions need to be posted in the forums under the appropriate subject.
You might've started out in school writing one-off apps in Java or Python. You may have created a basic website, cobbled together with HTML and the JavaScript saveur du jour. Either way, your code sits upon layers of fundamental building blocks, developed by thousands of others.
Eventually, you'll get to a point where you want to reuse someone else's code - or share your own! Hosting on a site like GitHub is only part of the solution, as you'd still have to include instructions on how to compile it, integrate it, etc.
If only we had a way to package up our code. 😐
What's a package?
Every app you use depends on other software, some of it bundled with the app itself, and some of it installed with your OS. For example, Notepad++ depends on libcurl.dll for transferring data during updates. The authors of Notepad++ didn't write libcurl, but it depends on it. If you rename it, then Notepad++ can't work, because it can't find that bundle of code.
Removing or renaming the code an app depends on has predictable results...
It's the same with the code we write.
Take .NET for example. When you build a solution in Visual Studio, it compiles your code, bundles it up into a series of DLL files (usually one per project), and dumps them into a "bin" directory (along with some other files, but let's ignore those).
Here's one of my projects, GhostSharp. After building, I get "GhostSharp.dll" and "GhostSharp.Tests.dll" files (for the code I wrote) and "NUnit3.TestAdapter.dll" (because my test project depends on it).
All of these various DLL files are packages - that is, they're just bundles of related code. And while I used .NET and libraries in my example, other languages have their own lingo - Ruby gems, Perl modules, Java JARs, etc.
You can send these files to a friend, email them, upload them to Dropbox, or post them on your blog. You can upload the source code to GitHub, with instructions on how to manually compile them. But then how do people find them, report bugs, get notified of updates, target a specific version...?
If only we had some way to manage all these packages. 😏
What's a package manager?
Your (library, gem, module, JAR, whatever) can be shared with other projects, which reference it and call its publicly accessible functions (kind of like an API), but you have to decide how to make that code available in the first place.
Provide the source code, so they can manually compile it.
Provide the compiled code, so they can just drop it in their project.
Provide a link to the source code (GitHub). Some tools, like rebar3 for Erlang, can reference a GitHub link directly and include it in the build process, even targeting a specific branch or tag.
Host the compiled code in some central location, where others can discover it, reference it, be notified of updates, maybe even discuss it and get help.
This is the problem a package manager solves, to one degree or another. It maintains each version of your code, along with metadata you provide about it, and makes it accessible to others. You could set one up on your machine, or on a corporate intranet, but there's a lot of good public ones out there, usually organized around the language you're working in.
NuGet is the de facto package manager for the .NET ecosystem, and the one I'm most familiar with. Actually, the term "NuGet" refers to a few things, all closely related.
It's a package management system for .NET.
NuGet.org is the site that hosts these packages.
It's also a Visual Studio extension that's used to reference those packages, installed in VS by default.
When you reference a NuGet package from Visual Studio and then build, VS helpfully pulls down the packages you're depending on, as well as any packages those packages depend on, etc, etc.
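In a newer SDK-style project, that reference boils down to a line or two in the csproj file - something like this, where the package name and version are just an example:

<ItemGroup>
  <!-- NuGet restores this package, plus anything it depends on, at build time -->
  <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
</ItemGroup>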
If you try publishing your own NuGet package, you can expect to see something like this. The first two images are in Visual Studio, while the last two show the contents of the generated nupkg file, and the nuspec file it contains.
Generating a NuGet package with Visual Studio
The metadata file in the upper right is typical of most package managers. You have to be able to tell the site, and people who might want to use your code, a little bit about your code. This might be a version, description, some tags the site could use to categorize it, etc.
If you want to learn more, Microsoft has a great set of intro docs covering the basics of how to use it. I found them informative even though I'm already familiar with it. You'll need Visual Studio to get the most out of it.
Most of us host something (and some of us everything) on GitHub, especially since they host private repos for free too now. I've been eager to try the GitHub Package Registry since they announced it last May - I just got access to the beta.
In their own words, GPR "allows you to host your packages and code in one place. You can host software packages privately or publicly and use them as dependencies in your projects." That doesn't really answer any of my questions though, such as:
Will it streamline the current process of uploading packages to NuGet?
Is it meant as a backup to the many package registries already available?
Or do they hope it'll become "the one registry to rule them all"?
Create a personal access token
Everything that follows pretty much came out of these docs, so I'd recommend checking them out later, and keeping them close at hand as you read through this.
The first step, no matter which language you're using to connect to the GPR, is to create a personal access token. Think of it this way - you want a third party to be able to access your data on GitHub. You could just give them your username and password and trust that they'll only access what they need. Don't do that. Ever! 🤬
Instead, create a token that grants exactly what the third party says it needs access to, and nothing more. Then it's GitHub's job to make sure that actually happens. Even though the app we're granting access to is also a GitHub service, they want us to treat GPR just like anything else. It's not a bad idea actually.
So, create a new token and select the read:packages and write:packages scopes. Leave the repo scope selected! Technically, if your repo is public you shouldn't need it... but if you're going to try using the package in VS you'll need it. I'll elaborate later. Oh, and copy the token it generates after you hit "Generate Token" or you'll be doing it over again in the next step. 😅
Push your first package
In order to do this yourself, you'll need something to publish. If you don't have a project in mind, just make a simple console app in VS that prints out "hello world!" and push it to GitHub. Or just fork and clone the tiny repo I created just for this purpose.
Checkout the "packages" tab for your repo on GitHub. When there aren't any yet, you'll get a reminder of the commands to run for pushing your first package.
First, tell NuGet it can use GPR as a source, and give it the credentials to use:
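The command goes something like this (GitHub's docs have the exact form; swap in your own username for USERNAME and the token you generated earlier for YOUR_TOKEN):

> nuget sources add -Name "GPR" -Source "https://nuget.pkg.github.com/USERNAME/index.json" -Username USERNAME -Password YOUR_TOKEN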
If all goes well, you'll get a confirmation message:
Package source with Name: GPR added successfully.
Then push the package via the command line. At this point, you should open your project and run Build / Pack, or open my project, open the project properties and in the "Package" tab change the package version to "1.0.1", and then build it.
Change to the directory where the package was published, probably under bin/debug, or provide the full path.
> nuget push HelloWorld.1.0.0.nupkg -Source "GPR"
Pushing HelloWorld.1.0.0.nupkg to 'https://nuget.pkg.github.com/grantwinney'...
PUT https://nuget.pkg.github.com/grantwinney/
OK https://nuget.pkg.github.com/grantwinney/ 1392ms
Your package was pushed.
If you forgot to change the version number and try pushing the package, you'll get a conflict message. Just change the version number and try again.
> nuget push HelloWorld.1.0.0.nupkg -Source "GPR"
Pushing HelloWorld.1.0.0.nupkg to 'https://nuget.pkg.github.com/grantwinney'...
PUT https://nuget.pkg.github.com/grantwinney/
WARNING: Error: Version HelloWorld of "1.0.0" has already been pushed.
Conflict https://nuget.pkg.github.com/grantwinney/ 807ms
See help for push option to automatically skip duplicates.
Response status code does not indicate success: 409 (Conflict).
That's it! Here's what it looks like on GitHub after pushing packages for my GhostSharp project. I uploaded two versions of GhostSharp - 1.0.2 and 1.0.4 - and you can see them listed in the lower-right corner.
Reference your package in VS
This, unfortunately, was a crappier experience than I'd hoped for. I'm not sure if it's a problem with the GitHub Package Registry or something else, but referencing the new package from GitHub didn't work right away. Let me back up a few steps - here's how things should work.
Create a new project, which you'll use to consume the package you just pushed to the GPR. Or if you're using the project I created, there's a couple in there already - one targets .NET Core 2.2 and the other targets .NET Framework 4.7.
Right-click your project's dependencies and choose "Manage NuGet Packages...", then switch the "Package source" to GPR. You should see anything you've uploaded for any of your personal projects. I ran into problems with this at first, but I'll explain all that later.
I can view the 2 packages I uploaded to the GPR
Including assembly files (modifying the nuspec)
This is all I've ever had to do when referencing a NuGet.org package, including my own GhostSharp package. GhostSharp is a .NET Standard project, and selecting it on this screen just works.
Unfortunately, referencing my "test" .NET Standard package from the GPR didn't work. I tried it with a .NET Core app and a .NET Framework app, but nada. It's a .NET Standard app, so it should work in both of these. 😕
Referencing the package from .NET Core (left) and .NET Framework (right) apps 😔
Restarting VS, clearing the NuGet caches, wiping out the bin/obj folders - none of it fixed it. I started suspecting something was missing from the .nuspec file VS generated but when I compared it to the GhostSharp package on NuGet.org, the layout was the same.
What ended up fixing it, although I'm still not sure why it's needed, was to include assembly files. I opened up the nupkg file that VS built, added a files node to the .nuspec file per this suggestion, then upped the version number to 1.0.1 (and renamed the nupkg file to match), and then ran the earlier command to push it to the GPR.
<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
<metadata>
<id>HelloWorld</id>
<version>1.0.1</version>
<authors>HelloWorld</authors>
<owners>HelloWorld</owners>
<requireLicenseAcceptance>false</requireLicenseAcceptance>
<description>A simple app for use with the GitHub package repository.</description>
<repository url="https://github.com/grantwinney/github-package-repo-first-try" />
<dependencies>
<group targetFramework=".NETStandard2.0" />
</dependencies>
</metadata>
<files>
<file src="bin\Release\*.*" target="lib/net45" />
</files>
</package>
The result? Everything. Works. WTF.
References work? ✅ Expected output? ✅
Oooookay. My GhostSharp package on NuGet.org does not have that files node. And when I download my "test" package and compare them, before and after adding the files node, there's no change at all to the package other than the .nuspec file itself. It didn't actually include anything else in the package. Yet everything works. I'm flummoxed. But there ya go.
Include repo scope on your token - even for public repos
This was the other issue I ran into, although if you left the repo scope selected on your token like I told you to, hopefully you didn't run into this one.
When I initially tried to list packages from the GPR in Visual Studio, it prompted me for a password. I tried my GitHub password, then the token string - nothing. The error message in the console was... less than helpful.
It did at least show me the URI it was trying to access, and when I entered that directly into a browser window I got the same authentication prompt. I entered my GitHub token string again, and got a much better response:
{"errors":
[{"code":"Your token has not been granted the required scopes to execute this query. The 'name' field requires one of the following scopes",
"message":" ['repo'], but your token has only been granted the: ['read:packages', 'write:packages'] scopes. Please modify your token's scopes at: https://github.com/settings/tokens."}]
}
Note the part about the additional scope. I initially thought that modifying the token to include the public_repo scope would be enough, since that allows a third party to "access public repositories", but nopedy nopers.
{"errors":
[{"code":"Your token has not been granted the required scopes to execute this query. The 'name' field requires one of the following scopes",
"message":" ['repo'], but your token has only been granted the: ['public_repo', 'read:packages', 'write:packages'] scopes. Please modify your token's scopes at: https://github.com/settings/tokens."}]
}
The solution was to leave the repo scope selected in the first place, which is why I told you to do it earlier. The docs said somewhere that that scope is only needed for private repos, but apparently not. I added the additional scope and entered my token as the password, and this was the response:
I ran into a couple irritating issues. One is a documentation issue, but the other (with the files node) I'm not sure about yet. I may test it more, or submit a bug report... or just let it go and hope someone from Microsoft discovers this post.
Discoverability
When I'm looking for a package to use, I use NuGet.org or RubyGems - not GitHub. I might end up there after clicking a link on one of the other sites. So will they make it easy to search the GPR globally somehow? Or is this really just intended as a backup to existing package management sites?
Community
What about registries that provide some aspect of community-building, or rallying around a particular language or framework? Will the GPR have something similar? I'm not really sure what I'm looking for here...
Reliability
Regarding deleting packages you've uploaded, they state:
To avoid breaking projects that may depend on your packages, GitHub Package Registry does not support package deletion or deleting a version of a package.
Under special circumstances, such as for legal reasons or to conform with GDPR standards, you can request deleting a package through GitHub Support.
This seems reasonable, and in line with other package management sites like NuGet.org and npm. Failure to do this has wreaked havoc before. But wait a sec...
GitHub allows you to delete a repository, and repositories contain your packages. So what happens then? I walked partway through deleting the repository I was using to test this (without going through with it), and it sure seems like the packages would go with it. Do they float around detached? 😕
Anyway, if you get beta access or they go live with everything, I'd love to hear about your experiences too. Good luck, and have fun!
I started this one right before building our new dining room table, and finished it soon afterwards. Ellen had asked for a book shelf for the kids, but something that would let them see the entire book cover instead of just the binding, like a magazine rack in the store.
As usual, some sketches first...
I knew what the general design would have to be. I mean, there's only one way to build a magazine rack, right? That's right, an upside down pyramid. I'm only half-joking... 🙃
The thing I struggled with was how to hide the screws. When you're building a piece of furniture, one option is to countersink the screws and then patch the holes. Another option, and the one I prefer, is to create pocket holes from underneath so they're completely hidden.
But if I constructed the shelf from the bottom up, starting with the lowest horizontal piece (the "floor" of the shelf, as it were) and then the first two vertical pieces (the "backing" for the books to lean on), then the next horizontal piece (the second level of the shelf) could only be attached by countersinking holes into the sides of the aforementioned vertical pieces.
What I ended up doing was constructing it from the top down. See that sketch on the right? I secured the topmost horizontal piece to the topmost vertical piece first, from underneath. Then I attached two sides with pocket screws using the Kreg Jig Pocket Hole Kit. Their tools hold up amazingly well, and their driver bits, drill bits, and pocket-hole screws aren't any more expensive than other screws and bits. I can't recommend them enough.
Attaching the topmost part of the shelf. Screws go upward into the vertical piece.
After attaching the first two pieces, I drilled pairs of pocket holes next to the first 5 screws. That was so I could attach the sides with pocket-hole screws, and everything would be hidden inside the shelf - no filling holes necessary. It was a tight fit trying to insert those screws though...
Attaching two sides, for the second level of the shelf
Rinse and repeat. In short order, I had several more levels built, for a total of 5 on each side. I measured the space on the bottom shelf wrong, so it's only 3/4" thick instead of 1.5", but it worked out anyway - a lot of kids' books are pretty thin!
Horizontal piece, 2 vertical pieces, rinse, repeat, all the way down
As I built this thing, even though I wanted it to be fairly thin, I couldn't help but notice near the bottom that there was a lot of space going to waste inside the shelf. I had planned to use the thin laminated wood for the siding, which I did, but I made a horizontal cut and used a couple piano hinges (one on each side) to expose the empty cavity behind the bottom two levels. A magnet and metal plate on each side holds the doors shut.
A hinge on the sides allows for extra storage underneath
And voila - the finished product!
Lots of room on both sides, plus extra storage for swapping books out as the kids get bored. I added casters to the bottom so it can easily roll around on the carpet - though it's a little tough when it's loaded down with books!
The kids' new reading corner
Oh, and don't forget to find someone you really trust for quality control. 😉👍
I've seen a number of questions over the years, where devs ask how they can make sure their program is the "most" whatever. The first spot in the notification tray, the first service of the startup services, the topmost of the topmost. They want to make sure that their app is first and foremost before all other apps... talk about tunnel vision.
They (or more likely, their project manager or director) would realize how ridiculous the question is, just by rephrasing the question:
How do I make my app the very <most something>, assuming another app is doing the same exact things my app is doing?
Obviously, any attempt to make sure one app is the "most" of anything opens up the possibility that two apps will try the same thing - fighting it out, struggling to become the mostest of the most. And the more aggressive the two apps become to secure that prime spot, the more they end up destabilizing the system and (inevitably) screwing over the end-user.
A Childish Approach
I mean that quite literally. I have kids, and they ask questions all the time about scenarios that benefit them alone, but when applied to everyone it's obvious the idea is unethical, or at the least unsustainable.
Can I pick a flower from that person's garden? They have so many. Well, what if everyone picked all their flowers?
Can I just cut to the front of the line at the amusement park, where my friends are already standing? It's just one more person, who cares? Well, what if everyone cut to the front to join their friends?
Can I just talk over my siblings? What I have to say is really important. Well, what if everyone feels that what they have to say is the most important and starts shouting?
It's absolutely childish to feel that your application is so important that it just must have the resource it needs - whatever that might be.
A Better Approach
When you find yourself asking how your app can be the "most" whatever, think about what you're really trying to do, and how you can do it in a way that doesn't conflict with what your users may want.
If you want your app to always be on top, why? Leave it to the user to dock it. If you need to notify them, consider toast notifications and the like.
If you want your app icon to always show in the notification area, and as the very first icon, why? The user already installed your app. If they find it compelling to have access at a moment's notice, they'll unhide the icon themselves.
Stack Exchange Inc has had a busy few weeks mucking things up. I've spent more time reading meta threads than I have in a long time, as I imagine a lot of longtime SE users have been, trying to grasp the current situation and all its implications.
Very recently, they made a small but significant change: (emphasis mine)
We don’t tolerate any language likely to offend or alienate people based on race, gender, sexual orientation, or religion — and those are just a few examples. Use stated pronouns (when known). When in doubt, don't use language that might offend or alienate.
They moved (not so) subtly from "Say something nice or nothing at all" to "Say something that fits our definition of nice or get out", and either hoped no one would notice or just didn't care if they did.
Blocking people and showing them the exit door.. in the name of "inclusion"
As a result, moderators are resigning or stepping back from their (completely unpaid and voluntary) roles all over the network, devs are pulling their support for SE's "Teams" product, and a whole lot of people are kinda just tossing in the towel.
They've promised to change some things going forward, but have refused to rollback the CoC, reinstate the moderator who was unceremoniously let go, or go back to the press admitting they were wrong.
My Thoughts (in no particular order)
I'm not a moderator, and my primary involvement has been on Stack Overflow where your gender is least likely to come up, so I'm probably in an unlikely position to cross the almighty CoC. Yet the actions of SE still worry me - the way they rolled it out and treated Monica sends a message to everyone in the network.
Your hill is not my hill
Anyone who's used social media for more than a few hours knows that every day brings a different issue to feel offended about, a different "hill to die on". Some people relish the opportunity to identify with these issues, and crush anyone who doesn't share their zeal - or God forbid, takes the opposing view.
As Bill Maher said (posted by one of the first mods who resigned):
The difference is that liberals protect people, and P.C. people protect feelings. They don’t do anything. They’re pointing at other people who are somehow falling short of their standards, which could have changed three weeks ago. They’re constantly moving the goalposts so they can go, “Gotcha!”
I think someone decided this was their hill, and they decided to make it everyone's hill. So that begs the question, where will we be marched tomorrow? Next year? I've said as much before - this network is the unique collaborative effort of millions of people, many of whom do not agree with the ideological stances individual SE staff may hold. But by making them "official" corporate stances, they're marching us up their hill too.
I've had coworkers find my answers on the site. I've had teammates tease me for being the "Stack Overflow guy". It follows that the more I contribute, the more they'll associate me with the things that SE values.
Caleb, who resigned from Christianity.SE, said it better.
The reason I am withdrawing my support now is that I am being asked not just to limit the scope of my voice on somebody else's platform but to lend my own voice in support of their cause. The new "tolerance" is tolerant of everything except ideological disagreement. It is forced conformity.
What's that stabbing pain in my back?
It is, I think, reasonable to expect a site you're participating in and contributing to will support you, or at least work fairly with you. SE failed here, in one post seemingly referring to forcing Monica out as "We learned (or were painfully reminded, rather) to never ship at 6 PM (EDT) on a Friday." Um, ouch.
They destroyed Monica's credibility, which affects every aspect of her real life including job prospects and anyone else who might search her name online.
[Joel] encouraged people to use their real names and led by example. Perhaps it was naive, but I followed suit. SE's actions, particularly going to the media, have the potential to affect my actual livelihood. SE never paid me but they can affect my future sources of income. Back then I thought the biggest risk of using my real name was offline trolling (which has happened). I was wrong.
I have zero trust that SE would offer me support in a dispute. If I choose to go for neutral instead of affirmative, I could very well end up the next one they publicly humiliate. They claim to have instituted a "no comment" policy for the press going forward... that doesn't change the fact that an SE employee was pissed off enough to do it in the first place. And I think they'd happily suspend my account while continuing to make a profit from my contributions with their ads.
Assuming malicious (bad faith) intent
I think of "be nice" as similar to the tenet "treat others as you'd want to be treated". It shows trust in the community to define what "be nice" is, and assumes everyone is acting in good faith and that their actions are "sincere conduct free from malice".
But SE seems, more and more, to believe the community is incapable of following the spirit of "be nice" (although I haven't seen specific reasons why), so they must now spell out the letter of the law. In compelling a particular speech, they send a message that they've given up on a genuine change of spirit. It makes them feel better about themselves, as if some progress has really been made.
The feeling is mutual, as a lot of the network has lost faith in them too.
With sites on nearly 200 topics incorporating millions and millions of people from all around the world, the exact set of rules at the very top should be pretty vague, allowing for individual communities to define the rules they need in order to be civil. If SE Inc says "be nice", the exact details should be hashed out by each "community" over time, which is basically what happens on each meta site.
What's next for me?
I closed out my profiles across the network, except Stack Overflow, where I'll continue to upvote good questions and answers.
I won't be making any other contributions (flagging, editing, answering, downvoting to signal low value, etc).
If I see an interesting question and think I can provide a valuable answer, I'll do it on my own blog instead of contributing to the SE machine.
I'll link this post in my bio, so any visitor that stumbles across it is aware of what they're stepping into by participating.
StackOverflow sees quite a few threads deleted, usually for good reasons. Among the stinkers, though, lies the occasional useful or otherwise interesting one, deleted by some pedantic nitpicker - so I resurrect them. 👻
Note: Because these threads are older, info may be outdated and links may be dead. Feel free to contact me, but I may not update them... this is an archive after all.
When I learned to program, (30 years ago) I was using a ZX-81 which used line numbers to label every line of code. The Sinclair QL I had next did support this too but also allowed the use of subroutines. The first GWBasic/ABasic interpreters also supported the use of line numbers instead of the "modern techniques" of the modern BASIC compilers.
Sample:
10 IF X = 42 GOTO 40
20 X = X + 1
30 GOTO 10
40 PRINT "X is finally 42!"
Now, purely for some dumb nostalgic feeling that I want to feel by going back to my roots, I just wonder... Is there still some BASIC compiler or Interpreter that supports this (obsolete) line-numbering technique? One that is kept up-to-date with the more modern operating systems, that is...
(OS doesn't matter, although I would prefer for one that supports Windows Vista 64-bits.)
Use a text editor, preferably one capable of code coloring that supports BASIC (Notepad++, for instance). That should be enough. These editors even throw in support for external tools, so you can set up your build environment by running the compiler/interpreter from within the editor.
As for the other question:
Now, purely for some dumb nostalgic feeling that I want to feel by going back to my roots, I just wonder... Is there still some BASIC compiler or Interpreter that supports this (obsolete) line-numbering technique? One that is kept up-to-date with the more modern operating systems, that is...
Yes. But careful! ;) The line numbers on the left were never a part of the language. They existed only for convenience (especially because of the goto x statements). The interpreter didn't care about them, much like modern languages.
Anyways, you are looking for the most excellent BASin. Go to http://www.worldofspectrum.org/emulators.html and search that page for BASin. The changelog can be seen at the author's blog. But the download is only available from World Of Spectrum
Comments
The line numbers on the left were used as GO TO targets, and were used in editing. The lines would automatically be sorted in line number order, and if you typed a line with the same line number as a previous one you'd overwrite the old one. They were an integral part of the language system.
– David Thornley, Sep 15 '09 at 20:04
those were editor features. Please do not downvote unless you have very specific knowledge of what you are talking about. The BASIC interpreter is well known and documented. Line numbers are internally generated. The editor line numbers are not used.
– Alexandre Bell, Sep 15 '09 at 21:01
I upvoted! I like those emulators!!! Even though these are just emulators of old systems while I would prefer a more modern compiler/interpreter. (Something that can handle modern amounts of memory.)
– Wim ten Brink Sep 15 '09 at 21:26
About the editor, I don't mind if I just have to use Notepad. If need be, I could write my own. :-) I'm more interested in a compiler or interpreter.
– Wim ten Brink Sep 15 '09 at 21:30
To be honest, it doesn't matter which BASIC dialect it is. What is important is that it's modernized enough to be able to run on modern hardware. It's actually going to be used for some real bad programming, where I want to use spaghetti code to write a reasonably complex project and then check if others can decipher the code and perhaps even find bugs. Basically, a test of wits, since you need to be real good to be able to write bad code on purpose. :-) And even better to be able to understand it when reading the code again. (Obfuscated coding!)
– Wim ten Brink Sep 16 '09 at 8:47
Have a look at QB64; its main goal is QBasic compatibility (and QBasic supported, but did not require, line numbers; you can use them if you want). Of course it runs on modern systems and provides features you wouldn't dream of under DOS. I'm a seasoned QBasic hacker, and I love doing some awful spaghetti code once in a while ;)
Chipmunk Basic supports line numbers, and is available for Mac OSX (PPC and Intel), Windows (2K/XP), and Linux. HotPaw Basic is pretty much the same language, for iPad and iPhone. Both of these support fully numbered programs.
QB64 is an implementation of the QBASIC language for modern machines, in 64 bit clean code. It supports fully numbered and target point only numbering, as well as text labels. It's available for OSX Intel, Windows XP, and Linux.
Answer by an unregistered user (Jun 06, 2011)
If you go to telehack.com you can run an old basic interpreter still using 10, 20, 30 etc right from the web browser. It has at least 100 examples already loaded and lots more.
As of a few years back, GWBASIC, the original MS-DOS BASIC interpreter was still around, and still usable from a DOS command window. I occasionally still use it when I want to grind a few numbers for e.g. some ham radio project.
The last thing I remember doing with it was calculating capacitance budgets for simple one-transistor Colpitts VFOs. I MIGHT have thrown together something to calculate turns counts for toroid inductors using various cores, I don't remember offhand.
Yes, there is one: the Microsoft GW-BASIC interpreter. Download from my mediafire: http://www.mediafire.com/download.php?souxlzvcsk5cxes. The password for the ZIP archive is 'lotsofmetalmo'. I have also included the BASCOM BASIC compiler, which compiles source code into executables; read the 'Readme.txt' before proceeding.
Try BBC Basic for Windows (http://www.rtrussell.co.uk/). This version of Basic includes a good colour text editor, and links to many Windows routines. It doesn't need line numbers, but still supports GOTO nnn, and you can add optional line numbers if you want to be nostalgic!
I still have old ZBasic compiler for MS-DOS. It allows optional line numbers to be used.
ZBasic is no longer available on the PC, but on the Mac it lives on as FutureBASIC. However, I am not sure if FutureBASIC allows line numbers to be used.
FreeBASIC, which is upward compatible with QBasic, allows numeric labels to be used (in addition to text labels). A numeric label looks just like a line number, so I think you could use such a label on every line, effectively giving the program line numbers.
See the documentation of labels, ProPgLabels at FBWiki.
It's the end of October and I've submitted 4 PRs for yet another Hacktoberfest. In exchange for a $10 shirt and some stickers, I spent more hours than I meant to writing code for OSS projects I didn't know existed, whose owners may not even bother merging my code.
I don't care about the shirt - my wife usually ends up with them, which is fine by me. It's not the stickers - I just give 'em to my kids. Let's face it, the prizes ain't that much to get excited about. So I find myself wondering... but why?
I can think of a few reasons - and they're not all as altruistic as I'd like.
It's more fun to be on the ship than waving it off
Sometimes it's fun just to know you were part of something. An event was announced, people were jumping in, I did too.
It's the reason 11 million people signed up to have their names etched on a microchip on the next Mars rover. Even if your part in something was tiny in the grand scheme, at least you left a mark.
If a tree falls in the woods and no one tracked it, did it really happen?
I can't deny it - I like the idea of completing 4 tasks instead of, well, not completing 4 tasks. We live in a world obsessed with numbers and quantities and "progress". If you exercise and then realize your Fitbit was off, did it even count? Have you ever completed a task, realized it wasn't on your to-do list, then added it and immediately crossed it off? No? Liar! Liarrrrrrrr.
It's part of the reason I quit social media - I love seeing progress in whatever form it's offered (more likes, more commits, more karma), and like most people I start conforming my actions to make sure the fake internet points keep coming. That's only healthy if the ends justify the means - or if the means are worthy of the end. 🙄
For me, I think it does.
A brick plus a brick makes... well, a very small building
A couple years ago, I contributed some refactoring and test coverage to a genealogy project. I got to share my knowledge, help someone else out, and learn new things about C#. I contributed a few bricks of my own, and someone else will add theirs on top of mine. Sounds like a win-win for everyone.
If you're interested, here were my contributions this year. The first two were about sharing my knowledge of building out test suites and configuring CI builds.
Then I found a project concerned with static code analysis - think code that validates code. If you've been programming for a while, it's sorta like meta-programming. Or something.
The last one was a stretch for me, because it concerns a technology I have little experience with but that I'd like to know more about - GraphQL. I wanted to try implementing it in a real world project, not just something I doctored up for a blog post.
StackOverflow sees quite a few threads deleted, usually for good reasons. Among the stinkers, though, lies the occasionally useful or otherwise interesting one, deleted by some pedantic nitpicker - so I resurrect them. 👻
Note: Because these threads are older, info may be outdated and links may be dead. Feel free to contact me, but I may not update them... this is an archive after all.
The language feature implementation status page of the Roslyn project lists several implemented or planned features of C# 6. I couldn't find any information on what some of them mean though.
C# seems to be taking the path of making the language more expressive by introducing a lot of special cases to the syntax and semantics, instead of opting for more light weight and general approaches (not that it's necessarily bad).
– Erik Allik Apr 5 '14 at 13:27
Well, #5 seems pretty obvious to me. Normally in object initializers, you can only set the values of fields/properties. Now you can also wire event handlers. (EDIT: yay!)
– Chris Sinclair Apr 5 '14 at 13:38
Just watch the video of the Build Conference session, "The future of C#". Ask only one question per post.
– Hans Passant Apr 5 '14 at 13:41
#7 refers to the fact that you can create a params array input. These methods also mean that in addition to passing in variables like myMethod(0, 1, 2, 3) you could pass it in as an array: int[] myInput = new []{0, 1, 2, 3}; myMethod(myInput); But it had to be an array. With "params IEnumerable", I would assume it's pretty much the same behaviour as "params Array" but you could then pass in any IEnumerable<T> instead of being forced to use arrays. IEnumerables tend to be more common than arrays so this is a nice convenience.
– Chris Sinclair Apr 5 '14 at 13:43
@ChrisSinclair: If there were a way of using a single specification to attach events at construction and detach them at disposal, that would IMHO be a major win, especially if it could take care of detaching events when the constructor fails. Although one can often get away with attaching events and never detaching them, there's IMHO no reason that should have become common practice, but for the lack of proper language support for event cleanup.
– supercat Apr 6 '14 at 21:10
This question is not ideal: too broad, and somewhat vulnerable to becoming out of date (these changes are not finalized). They're better addressed in a blog than on SO. That being said, the individual questions are all very precise syntax questions, so it's easily addressed. I won't vote to close, despite objectively thinking I ought to.
– Brian Apr 8 '14 at 13:12
A couple days after this question was asked, the status page was updated. Now, directly above the feature table, there are links to pdf files describing the features in detail.
– Brian Apr 9 '14 at 16:45
@AmirSaniyan - C# 1.0 was never intended to be the final form of the language. It's always been known that it was going to grow and change, and it should.
– Maurice Reeves Apr 10 '14 at 16:49
I see that C# is still playing catchup to VB.NET - god, if i had to write in C# I would totally have no motivation to write software what-so-ever (at all), it would be a drag. If you're not using VB.NET, you don't know what you're missing out on. VB.NET has slightly more features than C# (all things included), as Jon Skeet himself has said previously.
– Erx_VB.NExT.Coder Apr 13 '14 at 16:15
That Roslyn is open source is a good idea, while Microsoft C# & VB are not.
– Bellash May 5 '14 at 11:54
It is truly amazing that this question is not only closed, but has a delete vote. Because yes, there are infinite possible answers to "what does the nameof operator do?" (wtf, this is C# not C Plus Equality)
– McGarnagle Jun 27 '14 at 17:25
Dictionary, indexer and event initializers are an extension of the collection and object initializer scenarios:
o = new Foo { A = 123 };                   // object -- C# 3
c = new List<int> { 123 };                 // collection -- C# 3
d = new Dictionary<int, int> { [1] = 2 };  // dictionary
j = new JSObject { $x = y };               // indexed
b = new Button { OnClick += handler };     // event
is just a short way of writing:
temp = new Foo(); temp.A = 123; o = temp;
temp = new List<int>(); temp.Add(123); c = temp;
temp = new Dictionary<int, int>(); temp[1] = 2; d = temp;
temp = new JSObject(); temp["x"] = y; j = temp;
temp = new Button(); temp.OnClick += handler; b = temp;
Indexed member access
By now you can figure out that indexed member access:
j.$x
is just
j["x"]
Expression-bodied members
Expression bodied members:
int D => x + y;
is
int D { get { return x + y; } }
Semicolon operator
Semicolon operator is the sequential composition operator on expressions, just as semicolon is sequential composition on statements.
(M(); N())
means "evaluate M for its side effect and N for its value."
Params IEnumerable
Params IEnumerable is the ability to make params methods that take an IEnumerable<T> instead of a T[]. Suppose you have:
void M(params int[] p) { foreach (int i in p) Console.WriteLine(i); }
And now you have to say
M(myQuery.ToArray());
M doesn't use the fact that p is an array. It could be
void M(params IEnumerable<int> p) { foreach (int i in p) Console.WriteLine(i); }
and then M(myQuery) works without the ToArray call.
NameOf operator
Suppose you have Log("M called"); then if M is renamed you have to remember to change the string. But
Log(nameof(M) + " called");
if M is renamed then either the renaming refactoring will change the symbol, or if you do it manually, the program will stop compiling until you fix it.
Comments
Ah, much better answers, but I'm not sad because of who posted it :). So I take it the semi-colon operator (with accompanying parentheses) is like the comma operator in C/C++?
– J F Apr 5 '14 at 13:50
Great answer, just a couple of questions: 1. I'm not quite sure I understand the difference between indexed member and dictionary initializers. 2. What's the use case for the semicolon operator?
– Michael Apr 5 '14 at 13:54
(1) I'm not quite sure I understand either. :-) I'm not yet convinced of the utility of the $ features; they seem to save three keystrokes. (2) the use case is for when you have something that can only go in an expression context but you want to produce a side effect in a particular order. It is a weak feature on its own but it combines very nicely with other features.
– Eric Lippert Apr 5 '14 at 14:06
As I said below (darn you, Lippert, and your fast fingers!), the .$ operator is the same as the longstanding ! operator in VB. It's basically a very simple way to do expandos without having to support the huge amount of machinery you need for something more full featured like "dynamic", and covers most of the use cases you care about. (I always loved the ! operator and never thought it got the respect it deserved. Nice to see the C# team finally come around...)
– panopticoncentral Apr 5 '14 at 14:28
@panopticoncentral: Hey Paul, good to hear from you. Indeed, the justification for the feature is that it is a lighter-weight "dynamic".
– Eric Lippert Apr 5 '14 at 14:33
So if I have understood correctly, the new semicolon operator is just that horrible C and C++ comma operator?
– Manu343726 Apr 5 '14 at 17:11
It seems like it's supposed to be more permissive than C and C++'s comma operators: the example uses a declaration as its LHS.
– user743382 Apr 5 '14 at 19:05
@hdv that's yet another new feature proposal, putting declarations in expressions. It works very nicely with the composition operator.
– Eric Lippert Apr 5 '14 at 19:37
Is there anything you can do with the initializers, indexed member access, expression bodied members and the semicolon operator that you couldn't do with more or different code? They all seem like little syntactic shortcuts rather than new features. Or am I missing something?
– Patrick M Apr 5 '14 at 20:47
@PatrickM: You can put them in contexts that expect only an expression, like a query comprehension.
– Eric Lippert Apr 5 '14 at 21:16
I don't understand the dictionary initializer thing. The counterexamples that keep getting given are m = new Dictionary<int, int>(); m[1] = 2; and so on. But we already have a real dictionary initializer syntax: new Dictionary<int, int> { { 1, 2 }, { 3, 4 } }. Any idea why we needed another syntax for it?
– Aaronaught Apr 6 '14 at 15:13
@Aaronaught: Are you guaranteed that all dictionary objects have an Add method that takes pairs? Suppose for example you have an object that represents an XML document where element["attr"] = "value" is legal. Do we have a guarantee that this object has a method Add that happens to take two strings and adds an attribute-value pair? It seems plausible that the author of that object neglected to implement such a method.
– Eric Lippert Apr 6 '14 at 15:18
Fair enough. It's unfortunate that both of the examples that were chosen (Dictionary<TKey, TValue> and JObject) are both types that do have such a method and already support the existing initializer syntax. Are there no better examples in the BCL, where this alternative can't be used?
– Aaronaught Apr 6 '14 at 15:34
I know you don't work there any more, but do you know if a.$name is refactorable? I.e. renaming it and only having it apply to the right object/property?
– jmoreno Apr 6 '14 at 23:05
@jmoreno: I don't know; that's a good question. F# has a feature called "type providers" that might prove useful in such a scenario.
– Eric Lippert Apr 7 '14 at 0:11
It is getting much harder to look at C# code and know what spellings are being checked by the compiler and what is not, e.g. a.$spalling looks like the compiler will check that spalling is valid, unless you are an expert.
– Ian Ringrose Apr 7 '14 at 11:15
@IanRingrose If you're relying on your compiler to spellcheck for you, you're going to have a bad time...
– Algorath Apr 7 '14 at 14:11
@Algorath, I like static typing; code that does not use static typing needs more reviewing to trap that sort of error. I hope the "a" in a.$name can be a string constant.
– Ian Ringrose Apr 7 '14 at 15:07
@EricLippert: Should design flaws (missing Add method) really be fixed by adding new language features? One of the design goals of C# was to create a simple language. Is C# simple? No. Every new feature added lets C# deviate more from this goal.
– Olivier Jacot-Descombes Apr 19 '14 at 21:09
Can I have a vote to not implement the indexed member and expression bodied member sugars? Feels like a bit of a deviation from the C# syntax we are used to. And also the semicolon operator? The same thing can be done with { M(); return N(); }. Not sure how much value it adds for the added syntax. Everything else looks awesome :)
– nawfal Jul 22 '14 at 7:30
@nawfal: Sure, but a comment on StackOverflow isn't the right place. Join the roslyn forum on roslyn.codeplex.com if you want to express an opinion to the language designers.
– Eric Lippert Jul 22 '14 at 13:43
Dictionary initializer
new JObject { ["x"] = 3, ["y"] = 7 }
Like the name says, this feature simplifies initializing a dictionary (a.k.a. a hash table) in-place. For example, the above code would normally have to be written as a series of statements (a = new JObject(); a["x"] = 3; a["y"] = 7).
This is particularly useful when you want to immediately pass the dictionary as a parameter, initialize it as a part of construction, or use it as a part of a general object initializer (esp. for anonymous types).
Indexed member initializer
new JObject { $x = 3, $y = 7 }
This really just combines #1 and #3 (see below).
Indexed member access
c.$name = c.$first + " " + c.$last;
This is syntactic sugar for the index operator ([]). So c.$name is the same as c["name"]. This enables string indexing to look more like general property access. (FYI, this is basically the same as the ! operator that VB has had for forever.)
Expression-bodied members
public double Dist => Sqrt(X * X + Y * Y);
Syntactic sugar for simple function declarations. The example would currently be written as something like public double Dist { get { return Sqrt(X * X + Y * Y); } }.
Event initializers
new Customer { Notify += MyHandler };
This allows you to set up an event handler in the context of an object initializer. Today you'd have to write:
Customer x = new Customer { Name = "Bob" };
x.Notify += MyHandler;
but now you can write it all inline.
Semicolon operator
(var x = Foo(); Write(x); x * x)
This is the equivalent of the comma operator in C -- you can string expressions together and the "value" of the expression is the last one. Like in the example, it allows you to inline declare a variable, do something with it, and then return another value from the expression.
Params IEnumerable
int Avg(params IEnumerable<int> numbers) { … }
Today a "params" parameter has to be an array, but if all you're going to do is enumerate the values of the parameter (i.e. you don't need a full array) and the argument you're passing in isn't an array (but is enumerable), you can save the array allocation and copy. Useful especially for LINQ scenarios.
NameOf operator
string s = nameof(Console.Write);
Useful for diagnostic or exception scenarios (like ArgumentException), where you want to print out or say the name of something that failed or wasn't correct but don't want to have to write it out in a string and have to make sure it stays in sync. Or in places where you're doing a notification that includes the name of the event/property that fired.
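A quick aside from me, not part of the archived answer: nameof did ship in C# 6, and here's a small self-contained sketch (my own example) of the guard-clause and logging scenarios described above, with expression-bodied members thrown in for good measure.
using System;
// Small example (mine, not from the original thread) showing the shipped
// C# 6 nameof syntax in a guard clause and in a log message.
class NameofDemo
{
    static void Log(string message) => Console.WriteLine(message);
    static void Greet(string name)
    {
        if (name == null)
            throw new ArgumentNullException(nameof(name)); // stays in sync if the parameter is renamed
        Log(nameof(Greet) + " called");
        Console.WriteLine($"Hello, {name}!");
    }
    static void Main() => Greet("world");
}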
Comments
I added some formatting to your post considering it contains valuable content and this will be one of those popular questions. This will make it easier for people to skim through the different sections.
– Jeroen Vannevel Apr 5 '14 at 14:44
Thanks so much! I was typing furiously in the hopes of getting ahead of the pack (again, darn that Lippert!) and was going to come back and do that. Much appreciated.
– panopticoncentral Apr 5 '14 at 14:47
Wouldn't Dist => Sqrt(X * X + Y * Y) be a property? I think you should write Dist() => for a method.
– Kobi Apr 6 '14 at 6:10
Just like the $x, it can make string names of properties (for instance, those used for INotifyPropertyChanged) refactorable: you can rename the property and not have to worry about these references keeping up. Up to .NET 4 it took some large lambda expression to get the same effect. In the case of the $ syntax it still will have to match the field name, but everything is better than magic strings when refactoring (or huge amounts of string constants).
– wvd_vegt Apr 14 '14 at 8:28
Shared with attribution, where reasonably possible, per the SO attribution policy and cc-by-something. If you were the author of something I posted here, and want that portion removed, just let me know.
Back when I still used Twitter, I implemented Vicky Lai's "ephemeral" Go script to delete tweets older than a certain age. No one reads your old tweets unless they have a reason to, like you're running for public office or applying for jobs. And people say far too many stupid, snarky, callous things online to just have it all lingering out there attached to their names forever, yet that's what most people do.
Recent events at SE have me thinking about my contributions to their network too, and which of those I should/could clean up. Like Twitter, comments on SE have a limited shelf-life, and there's no reason to leave them hanging around. With the help of the Stack Exchange API, you can even automate deleting your comments.
Comments are second-class citizens, eligible for deletion at any time. Deleting your comments might ruin some conversation threads, but SE doesn't care.
Authentication
Before you can do much of anything, you'll need to prove who you are, which allows you to make a greater variety and number of API calls. It's a convoluted multi-part process that makes simple experimentation and personal scripts a huge pain.
1. Create a new application (to get a key and client id)
Open Stack Apps, create a new account if you don't have one yet, and fill in the required info. Since this is just an app for your own use, don't worry about most of the values. I specified a GUID for my app name, and "localhost" for the domain and website.
2. Create a post for your app (required for write access)
Add "PLACEHOLDER -" to the beginning of the title.
If you decide at some point that you do want to make this thing public, there are more detailed instructions here. But we don't need those right now. It's probably a good idea to delete the post when you're done using your app/script.
It'll be interesting to see if this survives. Given the level of snark around creating practice apps, I don't know if my "placeholder" post will survive the couple of months I intend to run this script.
3. Edit your application (to include the URL of the post)
Apps must have a registered Stack Apps post to write. All content created via the API will have links pointing back to an app's Stack Apps post, to aid in giving an app's author feedback and in reporting abusive content.
You can add or change your app's registered post from the Stack Apps App Management page. Removing a registered post will disable write for your application, as will deleting the registered post.
Now go back to your apps and open the one you just created. Scroll down to "Apps Post" and start typing the title of your new post. Select it, save, and voila. I didn't see the "Apps Post" field when I created the application originally - maybe I missed it?
After you submit changes, you'll see your new post title on the summary page.
4. Generate an access token (using your new client id)
This is a one-time process with the no_expiry scope applied - something I would not advise doing if you were creating a real app, although the default expiry of 1 day seems excessively short for a personal script.
The last two bullets are due to the write_access and no_expiry scopes. After clicking "Approve", you receive an "access_token" with no expiration date. Nothing to it.
Kicking the Tires
Okay, we finally have all the cogs and bolts in place, and it's time to see how the automatic back scratcher... er, Stack Exchange API actually works.
Get a Comment
I'd suggest using the Postman client to make your API requests when you're experimenting. It's easy to use and keeps everything synced in the cloud. Here's my request, and the resulting 3 comments I got back (because I set the pagesize).
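If you'd rather script the request than click around in Postman, here's a rough C# sketch of the same GET. I'm assuming the /2.2/me/comments route with site, pagesize, access_token, and key as query parameters; double-check it against the API docs before relying on it. Note that the API gzip-compresses responses, so decompression has to be enabled.
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
// Sketch: fetch a few of your own comments. ACCESS_TOKEN and KEY come from
// the authentication steps above and are read from environment variables here.
class GetMyComments
{
    static async Task Main()
    {
        var handler = new HttpClientHandler
        {
            AutomaticDecompression = DecompressionMethods.GZip // SE API responses are compressed
        };
        using var client = new HttpClient(handler);
        var url = "https://api.stackexchange.com/2.2/me/comments" +
                  "?site=stackoverflow&pagesize=3" +
                  $"&access_token={Environment.GetEnvironmentVariable("ACCESS_TOKEN")}" +
                  $"&key={Environment.GetEnvironmentVariable("KEY")}";
        var json = await client.GetStringAsync(url);
        Console.WriteLine(json); // the comment ids are in items[].comment_id
    }
}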
Delete a Comment
Now we can try deleting a comment. Just grab the first one returned in the previous GET request, and do a POST to delete it (I hate that). All gone (hopefully).
Now you see it! Now... you don't!
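And here's the delete as a sketch in C#: a form-encoded POST to the comment's delete route. I'm assuming /2.2/comments/{id}/delete with site, access_token, and key in the body; the id below is hypothetical and would come from the GET above.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
// Sketch: delete a single comment by id.
class DeleteOneComment
{
    static async Task Main()
    {
        using var client = new HttpClient();
        long commentId = 12345678; // hypothetical id from the previous request
        var body = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["site"] = "stackoverflow",
            ["access_token"] = Environment.GetEnvironmentVariable("ACCESS_TOKEN"),
            ["key"] = Environment.GetEnvironmentVariable("KEY")
        });
        var response = await client.PostAsync(
            $"https://api.stackexchange.com/2.2/comments/{commentId}/delete", body);
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
    }
}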
Making It More Advanced
I don't feel like running this every day to clean up my old comments, so I created a .NET Core app called SECommentHoover that uses RestSharp to get my comments and then delete them one-by-one. There's a built-in throttle to limit how fast API calls are made. I won't repost the code here - check out the repo.
Here's the output from running it. First it deletes some upvoted comments, which are limited to roughly 20 a day. Then it deletes other comments, up to the 10,000 daily quota limit.
Based on work I did a while back to tweet random blog posts, I set up an AWS Lambda job to run this once a day. Now I don't have to click on a bunch of individual comments one-by-one, and that makes me happy.
Get the code
Clone SECommentHoover to your machine, open it in VS, and build it.
Find the DLL files in bin/Debug/netcoreapp3.0 and select them all.
Zip up the files in the directory, but not the directory itself.
Create a new AWS Lambda function and choose whatever the latest .NET Core runtime is, currently .NET Core 2.1 (even though my app is .NET Core 3.0).
The name of the function and the role you create don't matter.
Under "Function code", click "Upload" and upload your zip file.
Set the handler as SECommentHoover::SECommentHoover.Program::Main
Under "Basic settings", decrease the memory to 128MB and increase the timeout to the max allowed 15 minutes... more on that in a moment.
Set the environment variables that you'll need. These include SE_NETWORK_SITE (e.g. "stackoverflow"), MS_BETWEEN_API_CALLS (e.g. "500"), and ACCESS_TOKEN and KEY, which were generated in the previous steps (see the sketch after this list for how the app reads them).
Click "Add Trigger", choose "CloudWatch Events", then "Schedule expression", and set a cron expression, like cron(0 12 * * ? *) for everyday at noon UTC. While you're testing this, you might want to uncheck the "Enable trigger" box.
A word of caution
The max AWS Lambda timeout is 15 minutes. If you're planning on making 10,000 requests per day (the max allowed by the API) in a single run, you'd have to set the MS_BETWEEN_API_CALLS value to about 90ms. While that's still far above the 30ms that breaks the "30 requests per second" rule, and we're just doing simple deletes instead of requesting tons of data, you might want to set the value higher and schedule your job to run a couple times a day. Or not. Caveat emptor.