"{\"inputs\":\"# seed \uac12\uc744 111\\nnp.seed(111)\\n\\n# \ud14c\uc2a4\ud2b8 \ub370\uc774\ud130\ub97c \uc0dd\uc131\ud558\ub294 \ud568\uc218 \uc815\uc758\\ndef CreateDataSet(Number=1):\\n \\n Output = []\\n \\n for i in range(Number):\\n \\n # 2009\ub144 1\uc6d4 1\uc77c\ubd80\ud130 2012\ub144 12\uc6d4 31\uc77c \uc0ac\uc774\uc5d0 \uc788\ub294 \uc6d4\uc694\uc77c\uc5d0 \ud574\ub2f9\ud558\ub294 \ub0a0\uc9dc\ub97c \uc0dd\uc131\\n rng = pd.date_range(start='1\\/1\\/2009', end='12\\/31\\/2012', freq='W-MON')\\n \\n # rng\uc758 \uae38\uc774\uc640 \uac19\uc740 \ud06c\uae30\uc758 \ub79c\ub364\ud55c \uc218\uc5d0 \ub300\ud55c \ub9ac\uc2a4\ud2b8 \ub9cc\ub4e4\uae30\\n # \uc774\ub54c, \ub79c\ub364\uc218\ub294 25\uc640 1000 \uc0ac\uc774\uc5d0 \uc788\ub294 \uc815\uc218\\n data = np.randint(low=25,high=1000,size=len(rng))\\n \\n # Status\uc5d0 \ub300\ud55c \ub9ac\uc2a4\ud2b8 \ub9cc\ub4e4\uae30\\n status = [1,2,3]\\n \\n # rng\uc758 \uae38\uc774\uc640 \uac19\uc740 \ud06c\uae30\uc758 \ub79c\ub364\ud55c statuses \ub9ac\uc2a4\ud2b8 \ub9cc\ub4e4\uae30\\n random_status = [status[np.randint(low=0,high=len(status))] for i in range(len(rng))]\\n \\n # State\uc5d0 \ub300\ud55c \ub9ac\uc2a4\ud2b8 \ub9cc\ub4e4\uae30 \\n states = ['GA','FL','fl','NY','NJ','TX']\\n \\n # rng\uc758 \uae38\uc774\uc640 \uac19\uc740 \ud06c\uae30\uc758 \ub79c\ub364\ud55c states \ub9ac\uc2a4\ud2b8 \ub9cc\ub4e4\uae30 \\n random_states = [states[np.randint(low=0,high=len(states))] for i in range(len(rng))]\\n \\n Output.extend(zip(random_states, random_status, data, rng))\\n \\n return Output\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\n\ubd84\uc11d\uc744 \uc704\ud55c \ud14c\uc2a4\ud2b8 \ub370\uc774\ud130\ub97c \ub9cc\ub4e4\uc5b4 \ubcf4\uc790.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"4. 
Sorting\\nSort a list in place (modifies but does not return the list):\\n\",\"targets\":\"simpsons.sort()\\nsimpsons\\n\\n#reverse sort\\nsimpsons.sort(reverse=True)\\nsimpsons\\n\\n#sort by a key\\nsimpsons.sort(key=len)\\nsimpsons\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Kullback-Leibler_divergences_in_native_Python__Cython_and_Numba.ipynb\\\".\\nThe first task is:\\nKL-UCB indexes\\nGaussian\\nCan you write Python code for it?\\n\",\"targets\":\"\\nklucbGauss_c = lambda x, y: kullback.klucbGauss(x, y, 0.25)\\n\\ntest_fs([klucbGauss, klucbGauss_numba, klucbGauss_cython, klucbGauss_c], lambda: (r(), r()))\\n\\n%timeit klucbGauss(r(), r())\\n\\n%timeit klucbGauss_numba(r(), r())\\n\\n%timeit klucbGauss_cython(r(), r())\\n\\n%timeit klucbGauss_c(r(), r())\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Users who have most followers\\n\\nsocialDFpq\\n\\n# Users who have most mentions\\nmentionDFpq\\n\\n# Of the top 5 followed users, how many mentions has each one?\\n\\n# top_f contains \\\"top 5 users who have most followers\\\"\\ntop_f = \\n\\ntop_f.\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nSpark SQL using DataFrames API\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"00_Intro_Python_Jupyter_notebooks\\/2_Jupyter_strings_and_lists.ipynb\\\".\\nThe first task is:\\nExercises:\\n\\nUse the count() method to count how many letters 'a' are in AE_quote?\\nUsing the same method, how many isolated letters 'a' are in AE_quote?\\nUse the index() method to find the position of the words 'genius', 'judge' and 'tree' in AE_quote. \\nUsing slice syntax, extract the words in exercise 3 from AE_quote. \\n\\nTwo more string methods turn out to be useful when you are working with texts and you need to clean, separate or categorize parts of the text. \\nLet's work with a different string, a quote by Eleanor Roosevelt:\\nCan you write Python code for it?\\n\",\"targets\":\"\\nER_quote = \\\" Great minds discuss ideas; average minds discuss events; small minds discuss people. \\\"\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Data\\nThe data used in this notebook is also available on FigShare:\\nGeers AJ, Larrabide I, Radaelli AG, Bogunovic H, Kim M, Gratama van Andel HAF, Majoie CB, VanBavel E, Frangi AF. Reproducibility of hemodynamic simulations of cerebral aneurysms across imaging modalities 3DRA and CTA: Geometric and hemodynamic data. FigShare, 2015. 
DOI: 10.6084\\/m9.figshare.1354056\\nVariables are defined as follows (TA: time-averaged; PS: peak systole; ED: end diastole):\\n* A_N: Aneurysm neck area\\n* V_A: Aneurysm volume\\n* Q_P: TA flow rate in the parent vessel just proximal to the aneurysm\\n* Q_A: TA flow rate into the aneurysm\\n* NQ_A: Q_A \\/ Q_P\\n* WSS_P: Average TA WSS on the wall of a parent vessel segment just proximal to the aneurysm\\n* WSS_A: Average TA WSS on the aneurysm wall\\n* NWSS_A: WSS_A \\/ WSS_P\\n* LWSS_A: Portion of the aneurysm wall with WSS < 0.4 Pa at ED\\n* MWSS_A: Maximum WSS on the aneurysm wall at PS\\n* 90WSS_A: 90th percentile value of the WSS on the aneurysm wall at PS\\n* N90WSS_A: 90WSS_A normalized by the average WSS on the aneurysm wall at PS\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndf_input = pd.read_csv(os.path.join('data', '3dracta.csv'), index_col=[0, 1])\\ndf_input\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Build the Decoding Layer\\nImplement decoding_layer() to create a Decoder RNN layer.\\n\\nCreate RNN cell for decoding using rnn_size and num_layers.\\nCreate the output fuction using lambda to transform it's input, logits, to class logits.\\nUse the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.\\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.\\n\\nNote: You'll need to use tf.variable_scope to share variables between training and inference.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndef decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,\\n num_layers, target_vocab_to_int, keep_prob):\\n \\\"\\\"\\\"\\n Create decoding layer\\n :param dec_embed_input: Decoder embedded input\\n :param dec_embeddings: Decoder embeddings\\n :param encoder_state: The encoded state\\n :param vocab_size: Size of vocabulary\\n :param sequence_length: Sequence Length\\n :param rnn_size: RNN Size\\n :param num_layers: Number of layers\\n :param target_vocab_to_int: Dictionary to go from the target words to an id\\n :param keep_prob: Dropout keep probability\\n :return: Tuple of (Training Logits, Inference Logits)\\n \\\"\\\"\\\"\\n # refer to chc170's code\\n with tf.variable_scope(\\\"decoding\\\") as decoding_scope:\\n dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)\\n output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)\\n train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)\\n decoding_scope.reuse_variables()\\n infer_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int[''], target_vocab_to_int[''], sequence_length-1, vocab_size, decoding_scope, output_fn, keep_prob)\\n \\n return train_logits, infer_logits\\n\\n\\\"\\\"\\\"\\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS 
LINE\\n\\\"\\\"\\\"\\ntests.test_decoding_layer(decoding_layer)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%bigquery schema --table $miRNA_BQtable\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nFrom now on, we will refer to this table using this variable ($miRNA_BQtable), but we could just as well explicitly give the table name each time.\\nLet's start by taking a look at the table schema:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"01-notebook.ipynb\\\".\\nThe first task is:\\nDeutsche Konstituentenb\u00e4ume\\nZun\u00e4chst m\u00fcssen die Pfade korrekt gesetzt werden.\\nPATH_TO_CORE : Pfad zum CoreNLP\\nPATH_TO_GER_MODEL : Pfad zum deutschen Modell\\nVerwenden Sie am besten voll ausgeschriebene absolute Pfade.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nPATH_TO_CORE = \\\"...\\\"\\nPATH_TO_GER_MODEL = \\\"...\\\"\\n\\njar = os.path.join(\\n PATH_TO_CORE,\\n \\\"stanford-corenlp-3.8.0.jar\\\"\\n)\\nger_model = os.path.join(\\n PATH_TO_GER_MODEL,\\n \\\"stanford-german-corenlp-2017-06-09-models.jar\\\"\\n)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"EPSI\\/I4\\/Business Intellgence\\/EP4\\/premier_pas_algo_titanic.ipynb\\\".\\nThe first task is:\\nOn va donc remplacer cette valeur manquante : \\n - Soit on choisit une valeur subjective\\n - On prend la valeur la plus ad\u00e9quate sur notre jeux de donn\u00e9es train (comme la m\u00e9diane)\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# Lib de calcul\\nimport numpy as np\\n\\nmediane_fare = np.median(train['Fare'])\\nprint \\\"La medianne de Fare dans notre jeu de donn\u00e9es train est : \\\" + str(mediane_fare)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Transformation\\n# This transformer can be loaded with a csv_file at construction \\n# >> mypandas = PandasTrasformer(regex = [], csv_file = 'file.csv')), or \\n# or after creation of object \\n# myfile = {'csv_file': '..\\/input\\/bank-additional\\/bank-additional\\/bank-additional.csv'}\\n# >>>mypandas.set_params(**myfile)\\n# or when executing transformation\\n# >>> mypandas.transform(csv_file = 'file.csv')\\n\\nclass PandasTransformer(BaseEstimator, TransformerMixin):\\n \\\"\\\"\\\"\\n To load the data into a Dataframe and transform its content\\n according to a regular expression substitution.\\n\\n Usage\\n -----\\n >>> myregex = RegexTransformer(regex = [(r'\\\\_','.')])\\n >>> df = myregex.transform(csv_file = 'data.csv')\\n\\n \\\"\\\"\\\"\\n\\n def __init__(self, regex:List[Tuple] = None, csv_file:str = None) -> None:\\n \\\"\\\"\\\"\\n Select which dataframe will be used to load pandas\\n DataFrame, and replace content with . 
into _\\n \\n Parameter\\n ---------\\n regex: list\\n a list of regular expression substitutions \\n (e.g., [(r'\\\\.','_'),(r'\\\\_)]).\\n \\n csv_file: str\\n The path to a DataFrame object.\\n \\\"\\\"\\\"\\n self.regex = regex # for set_params() and get_params()\\n self.csv_file = csv_file # for set_params() and get_params()\\n\\n \\n def fit(self, X, y = None):\\n \\\"\\\"\\\"\\n Nothing to do here\\n \\\"\\\"\\\"\\n return self\\n \\n def transform(self, csv_file:str = None) -> pd.DataFrame:\\n \\\"\\\"\\\"\\n Returns a pandas DataFrame with substituted\\n regular expression (with . into _)\\n \\n Parameter\\n ---------\\n csv_file: str\\n The path to a DataFrame object.\\n \\n Returns\\n -------\\n A Panda DataFrame with regular expressions altered\\n \\\"\\\"\\\"\\n if csv_file is not None: \\n self.csv_file = csv_file\\n \\n df = pd.read_csv(self.csv_file)\\n \\n if self.regex is not None: # defined at construction\\n for exp,...\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nCustom transformers inherit from the TransformerMixin class to obtain the fit_transform() method. We also get get_params() and set_params() when inherinting from the BaseEstimator class.\\nThe first transformer will create directly a Pandas object and perform some regular expression operations. The rest of the transformers will operate on a pandas DataFrame objects.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Code Libraries can be imported via a Code Cell\\n\",\"targets\":\"import numpy as np\\n\\nnp.pi\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Filter out that review with 0 length\\nreviews_ints = [r for r in reviews_ints if len(r) > 0]\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nOkay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. 
For reviews longer than 200, we can truncate them to the first 200 characters.\\n\\nExercise: First, remove the review with zero length from the reviews_ints list.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"4) Print a list of Lil's that are more popular than Lil' Kim.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ntarget_score = 72 \\n#PART ONE: INITIAL CONDITON\\n\\nsecond_best_artists = []\\n#AGGREGATION PROBLEM - when you're looping through a series of objects and you someitmes you want to add some \\n#of those objects to a DIFFERENT list\\nfor artists in artists:\\n print(\\\"Looking at\\\", artist['name'], \\\"who has a popularity of\\\", artist['popularity'])\\n #PART TWO: CONDITONAL - when we want to add someone to our list\\n if artist['popularity'] == 72:\\n #PART THREE: THE CHANGE - add artist to our list \\n second_best_artists.append(artist['name'])\\n \\nprint(\\\"OUR SECOND BEST ARTISTS ARE:\\\")\\nfor artist in second_best_artists:\\n print(artist)\\n \\nfor artist in artists:\\n #print(\\\"Looking at\\\", artist['name'])\\n if artist['name'] == \\\"Lil' Kim\\\":\\n print(\\\"Found Lil' Kim\\\")\\n print(artist['popularity'])\\n else: \\n pass\\n #print(\\\"Not Lil' Kim\\\")\\n\\nimport requests \\nresponse = requests.get('https:\\/\\/api.spotify.com\\/v1\\/search?query=Lil+&offset=0&limit=50&type=artist&market=US')\\ndata = response.json()\\ndata.keys()\\nartist_data = data['artists']['items']\\nfor artist in artist_data:\\n print(artist['name'], artist['popularity'], artist['genres'])\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Clustering\\nUse KMeans module from SciKit-Learn to cluster your sample into three groups (WT, KO, HET). Be careful, your samples may have less than three groups. So always check the diff plots first.\\n\",\"targets\":\"import sklearn.cluster as sc\\nfrom IPython.display import display\\n\\nmat = dfdif.T\\nhc = sc.KMeans(n_clusters=3)\\nhc.fit(mat)\\n\\nlabels = hc.labels_\\nresults = pd.DataFrame([dfdif.T.index,labels])\\ndisplay(results.loc[:0,results.iloc[1]==0])\\ndisplay(results.loc[:0,results.iloc[1]==1])\\ndisplay(results.loc[:0,results.iloc[1]==2])\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"#get a single variables value\\nprint(x.varValue)\\n\\n#or get all the variables\\nfor v in prob.variables():\\n print(v, v.varValue)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\n7. 
Get the results\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# initialize extra trees model\\nert_model1 = H2ORandomForestEstimator(\\n ntrees=10000, \\n max_depth=10, \\n col_sample_rate_per_tree=0.1,\\n sample_rate=0.8,\\n stopping_rounds=50,\\n score_each_iteration=True,\\n nfolds=3,\\n keep_cross_validation_predictions=True,\\n seed=12345,\\n histogram_type='random') # <- this is what makes it ERT instead of RF\\n\\n# train ert model\\nert_model1.train(\\n x=encoded_combined_nums,\\n y='SalePrice',\\n training_frame=train,\\n validation_frame=valid)\\n\\n# print model information\\/create submission\\nprint(ert_model1)\\nert_preds1_val = ert_model1.predict(valid)\\nranked_preds_plot('SalePrice', valid, ert_preds1_val) # valid RMSE not so hot ...\\nert_preds1_test = ert_model1.predict(test)\\ngen_submission(ert_preds1_test) # 0.14855 public leaderboard\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nExtremely random trees model - typically not tuned as much as GBM\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Answer:\\nthis model used to predict have mostly captures the cariation of the target variable, with a r2_score 0.923. \\nImplementation: Shuffle and Split Data\\nYour next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.\\nFor the code cell below, you will need to implement the following:\\n- Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets.\\n - Split the data into 80% training and 20% testing.\\n - Set the random_state for train_test_split to a value of your choice. This ensures results are consistent.\\n- Assign the train and testing splits to X_train, X_test, y_train, and y_test.\\n\",\"targets\":\"# TODO: Import 'train_test_split'\\nfrom sklearn.cross_validation import train_test_split \\n\\n# TODO: Shuffle and split the data into training and testing subsets\\nX_train, X_test, y_train, y_test = train_test_split(\\n features, prices, test_size=0.2, random_state=50)\\n\\n# Success\\nprint \\\"Training and testing split was successful.\\\"\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%%matlab\\nfor num_temps = 1:length(t1) \\n line([t1(num_temps) t1(num_temps)], [0 0.5]) \\nend\\nxlabel('Temp (sec)')\\n%Idem pour l\u2019axe des y:\\nylabel('Essai #')\\n%Enfin, on fixe les limites de l\u2019axe des y\\nylim([0 3])\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\n
    \\n\\n

  1. Faire une nouvelle figure o\u00f9 chaque barre du diagramme a une hauteur\\n de 0.5, plut\u00f4t que 1. Sauvegardez ce fichier sous le nom \\\"figure_dispersion_lignes.png\\\". <\\/li><\\/h4>\\n<\\/ol><\\/font>\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Use IBM Decision Optimization CPLEX Modeling for Python<\\/span>\\nLet's create the optimization model to select the best ways to contact customers and stay within the limited budget.\\nStep 1: Set up the prescriptive engine<\\/span>\\n\\nSubscribe to the Decision Optimization on Cloud solve service here.\\nGet the service URL and your personal API key and enter your credentials here:\\n\\nFirst import docplex and set the credentials to solve the model using IBM ILOG CPLEX Optimizer on Cloud.\\ndocplex is already installed with its dependancies in XSD.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nimport sys\\nimport docplex.mp\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"And let's check how accurate it is by testing it with the remaining 20% of the data set.\\n\",\"targets\":\"guesses = []\\ntargets = []\\nrnge = [1438,1797] \\nnumber_correct = 0\\nrnge = range(rnge[0],rnge[1])\\nfor n in rnge:\\n\\n guesses.append(core.output_reader(core.propforward(donnel_v2, digits.data[n])[-1]))\\n targets.append(digits.target[n])\\n\\nfor guess, target in zip(guesses, targets):\\n if guess == target:\\n number_correct+=1\\nnumber_total = len(rnge)\\nprint(number_correct\\/number_total*100)\\nprint(\\\"%d\\/%d\\\" %(number_correct, number_total))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Because we are working with full length unrolls, this gradient estimator has no state -- there is nothing to keep track of truncation to truncation.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ngradient_estimator_state = gradient_estimator.init_worker_state(\\n worker_weights, key=key)\\ngradient_estimator_state\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# make slightly wider figure\\nwide_figsize = plt.rcParams['figure.figsize'].copy()\\nwide_figsize[0] *= 1.4\\nplt.figure(figsize=wide_figsize)\\nplt.scatter(x, y)\\nplt.xlabel('year')\\nplt.ylabel('summer temperature at Kilpisjarvi');\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nPlot the data\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"AES\\nNo, we are not going to give an implementation of AES here.\\nThe algorithm is very complicated, and a real implementation would go well beyond the scope of this notebook.\\nDiffie-Hellman Key Exchange\\nThe Diffie-Hellman key exchange scheme is based on the discrete logarithm problem.\\nThat is, it is very difficult to find $x$ from the value of $g^x \\\\bmod p$, if $g$ and $p$ are chosen carefully.\\nFor example, see what is happening for $g = 2$ and $p = 
13$.\\n$g^x$ is predictable, but $g^x \\\\bmod p$ does not show any pattern.\\nNote that, in what follows, we are going to use Python's function pow(). \\nAlthough Python has an operator for exponentiation (**), pow() is useful because it can either be called as pow(x, y) to return $x^y$, or pow(x, y, p) to return $x^y \\\\bmod p$.\\n\",\"targets\":\"g = 2\\np = 13\\n\\nfor x in range(1, 13):\\n print(pow(g, x), pow(g, x, p))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"As can be seen, the $k$ nearest neighbours model is able to predict ratings to within \u00b10.88, with a standard deviation of 0.97. While this error is not small, it's not so large that it won't be useful. Further impovements can be made by filling the missing values in a different way or providing more ratings.\\nMaking predictions\\nNow that we have a final model, we can make recommendations about films we haven't rated:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\npredictions = pd.Series()\\nfor film in df.columns:\\n if film in y.index:\\n continue # If we've already rated the film, skip it\\n predictions[film] = clf.predict(df.loc[:, [film]].T)[0]\\n\\npredictions.sort_values(ascending=False).head(10)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"#print(ufo_data['City'].unique()[0:1000])\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nLet us explore how the city column looks. It is important to take a look at lots of values in the dataset to check for anomalies or data with noise. 
For example the city data has noise with text containing additional information within () and other such noises.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Creating a model galaxy image\\nWe assume that the image size is 240x240 pixels, and that the \\\"true\\\" light distribution is described by a 2D Sersic profile with the following parameters:\\n\",\"targets\":\"ny, nx = 240, 240\\ny, x = np.mgrid[0:ny, 0:nx]\\nsersic_model = models.Sersic2D(\\n amplitude=1, r_eff=20, n=2.5, x_0=120.5, y_0=96.5,\\n ellip=0.5, theta=-0.5)\\nimage = sersic_model(x, y)\\nplt.imshow(image, cmap='gray', origin='lower',\\n norm=simple_norm(image, stretch='log', log_a=10000))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Demo-Test.ipynb\\\".\\nThe first task is:\\nVersion 7 - Improving Time Complexity - O^n\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndef solution_7(A):\\n\\n left_sum = 0\\n\\n right_sum = sum(A[1:])\\n\\n len_arr = len(A)\\n\\n try:\\n if len_arr <= 1:\\n if len_arr == 1:\\n return 0\\n else:\\n return -1\\n\\n if left_sum == right_sum:\\n return 0\\n\\n if sum(A[:-1]) == 0:\\n return len_arr-1\\n\\n if len(A) == abs(sum(A[:])):\\n if len(A) % 2:\\n return len(A)\\/2\\n else:\\n return -1\\n\\n if len(A) >= 0 and len(A) <= 100000:\\n left_sum = A[0]\\n \\n for i,val in enumerate(A[1:]):\\n right_sum -= val\\n \\n #left_sum += val\\n \\n if left_sum == right_sum:\\n return i+1\\n \\n left_sum +=val\\n \\n if i >= len_arr:\\n return -1\\n\\n else:\\n return -1\\n\\n\\n except (ValueError, RuntimeError) as err:\\n\\n return err\\n\\nprint solution_7(Ax)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"By default, the remove() function removes the i-th particle from the particles array, and shifts all particles with higher indices down by 1. This ensures that the original order in the particles array is preserved (e.g., to help with output).\\nBy running through the planets in reverse order above, we are guaranteed that when a particle with index i gets removed, the particle replacing it doesn't need to also be removed (we already checked it).\\nIf you have many particles and many removals (or you don't care about the ordering), you can save the reshuffling of all particles with higher indices with the flag keepSorted=0:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nsim.remove(2, keepSorted=0)\\nprint(\\\"Number of particles after cut = {0}\\\".format(sim.N))\\nprint(\\\"IDs of remaining particles = {0}\\\".format([p.id for p in sim.particles]))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"
    \\n\",\"targets\":\"# Import Titanic data (local CSV)\\ntitanic = h2o.import_file(\\\"kaggle_titanic.csv\\\")\\n\\n# Explore the dataset using various functions\\ntitanic.head(10)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Build a Lane Finding Pipeline\\n\",\"targets\":\"gray = grayscale(image)\\n\\n# Define a kernel size and apply Gaussian smoothing\\nkernel_size = 9\\nblur_gray = gaussian_blur(gray, kernel_size)\\n\\n# Define our parameters for Canny and apply\\nlow_threshold = 50\\nhigh_threshold = 100\\nedges = canny(blur_gray, low_threshold, high_threshold)\\n\\n# Next we'll create a masked edges image using cv2.fillPoly() \\n\\n# This time we are defining a four sided polygon to mask\\nimshape = image.shape\\nvertices = np.array([[(0,imshape[0]),(450, 320), (490, 320), (imshape[1],imshape[0])]], dtype=np.int32)\\n\\nmasked_edges = region_of_interest(edges, vertices)\\n\\n# Define the Hough transform parameters\\n# Make a blank the same size as our image to draw on\\nrho = 1 # distance resolution in pixels of the Hough grid\\ntheta = np.pi\\/180 # angular resolution in radians of the Hough grid\\nthreshold = 50 # minimum number of votes (intersections in Hough grid cell)\\nmin_line_length = 1 #minimum number of pixels making up a line\\nmax_line_gap = 250 # maximum gap in pixels between connectable line segments\\nline_image = np.copy(image)*0 # creating a blank to draw lines on\\n\\n# Run Hough on edge detected image\\n# Output \\\"lines\\\" is an array containing endpoints of detected line segments\\nlines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]),\\n min_line_length, max_line_gap)\\n\\n# Iterate over the output \\\"lines\\\" and draw lines on a blank image\\nfor line in lines:\\n for x1,y1,x2,y2 in line:\\n cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),10)\\n\\n# Create a \\\"color\\\" binary image to combine with line image\\ncolor_edges = np.dstack((edges, edges, edges)) \\n\\n# Draw the lines on the edge image\\nlines_edges = cv2.addWeighted(color_edges, 0.8, line_image, 1, 0) \\nplt.imshow(lines_edges)\\n\\nplt.imshow(gray,cmap='gray')\\ncv2.imwrite('test_images_output\\/gray.jpg', gray)\\n\\nplt.imshow(blur_gray,cmap='gray')\\n\\nplt.imshow(edges,cmap='gray')\\ncv2.imwrite('test_images_output\\/edges.jpg', edges)\\n\\nvertices\\n\\nplt.imshow(masked_edges,cmap='gray')\\ncv2.imwrite('test_images_output\\/edges.jpg',...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# This is the number of data points you'll send to the What-if Tool\\nWHAT_IF_TOOL_SIZE = 500\\n\\nfrom witwidget.notebook.visualization import WitConfigBuilder, WitWidget\\n\\n\\ndef create_list(ex_dict):\\n new_list = []\\n for i in feature_names:\\n new_list.append(ex_dict[i])\\n return new_list\\n\\n\\ndef example_dict_to_input(example_dict):\\n return {\\\"dense_input\\\": create_list(example_dict)}\\n\\n\\nfrom collections import OrderedDict\\n\\nwit_data = test_data.iloc[:WHAT_IF_TOOL_SIZE].copy()\\nwit_data[\\\"duration\\\"] = test_labels[:WHAT_IF_TOOL_SIZE]\\nwit_data_dict = wit_data.to_dict(orient=\\\"records\\\", into=OrderedDict)\\n\\nconfig_builder = (\\n WitConfigBuilder(wit_data_dict)\\n .set_ai_platform_model(\\n PROJECT_ID, MODEL, VERSION, adjust_example=example_dict_to_input\\n )\\n 
.set_target_feature(\\\"duration\\\")\\n .set_model_type(\\\"regression\\\")\\n)\\n\\nWitWidget(config_builder)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nUnderstanding AI Explanations with the What-If Tool\\nIn this section you'll use the What-If Tool to better understand how your model is making predictions. See the cell below the What-if Tool for visualization ideas.\\nThe What-If-Tool expects data with keys for each feature name, but your model expects a flat list. The functions below convert data to the format required by the What-If Tool.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Plotting lines surrounding each of the ROIs\\nplt.figure(figsize=(7, 7))\\n\\nfor i_cell in range(len(Coor)):\\n x = Coor[i_cell, 0][0]\\n y = Coor[i_cell, 0][1]\\n # Plot border around cells\\n plt.plot(x, y)\\n\\n# Invert the y-axis because image co-ordinates are labelled from top-left\\nplt.gca().invert_yaxis()\\nplt.show()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nShow detected cells\\nLet's render the ROIs using matplotlib.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Day_00\\/03_Functions\\/FunctionsSolutions.ipynb\\\".\\nThe first task is:\\nWhat if we the slope and intercept to be default values of 2 and 0, respectively, but changable sometimes? \\nHint: try setting the parameters to equal the default values!\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndef line(x,m=2,b=0):\\n \\\"\\\"\\\"Return the y value of a line given x and optionally the slope m and intercept b\\\"\\\"\\\"\\n y = add(m*x,b)\\n return y\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"FiveThirtyEightRiddler\\/2018-04-06\\/vandal_dates.ipynb\\\".\\nThe first task is:\\nWhat year will see the most vandalism?\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndate_counts = Counter(d.year for d in anarchy_dates)\\n\\n_, max_attacks = date_counts.most_common()[0]\\n_, min_attacks = date_counts.most_common()[-1]\\n\\nfor year, attacks in date_counts.items():\\n if attacks == max_attacks:\\n print(f'{attacks} attacks in year {year}')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# WRONG\\ndef carrying_capacity(system):\\n system = System(alpha=0.025, beta=-0.0018)\\n K = -system.alpha \\/ system.beta\\n return K\\n \\nsys1 = System(alpha=0.03, beta=-0.002)\\npop = carrying_capacity(sys1)\\nprint(pop)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nThis version actually works, but it is not as versatile as it could be.\\nIf there are several System objects, this function can only work with one of them, and only if it is named sys1.\\nDysfunction #2: Clobbering the parameters. 
When people first learn\\nabout parameters, they often write functions like this:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Computing a growth rate of triples w.r.t. the number of randomly selected relations\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndef random_relation_triple(T, n_samples=100):\\n n_r = len(T); n_e = T[0].shape[0]\\n n_relations = list()\\n n_triples = list()\\n n_samples = 10000 # repeat counting n_sample times\\n\\n shuffled_r = np.arange(n_r)\\n\\n tic = time.time()\\n for s in range(n_samples):\\n r_samples = np.random.randint(1, n_r)\\n np.random.shuffle(shuffled_r)\\n\\n cnt = 0\\n for ri in shuffled_r[:r_samples]:\\n cnt += T[ri].nnz\\n\\n n_relations.append(r_samples)\\n n_triples.append(cnt)\\n print('Elpased time', time.time()-tic)\\n return n_relations, n_triples\\n\\nn_relations, n_triples = random_relation_triple(T, n_samples = 10000)\\n\\npopt, pcov = curve_fit(func, n_relations, n_triples) \\nfitted_t = func(np.arange(n_r), *popt) \\n\\nplt.figure(figsize=(8,6))\\nplt.scatter(n_relations[::10], n_triples[::10], label='empirical')\\nplt.title('# of relations vs # of triples')\\nplt.xlabel('# relations')\\nplt.ylabel('# triples')\\n\\nfirst_order = [x for x in range(n_r)]\\nsecond_order = [x**2 for x in range(n_r)]\\nplt.plot(first_order, label='$y=x$')\\nplt.plot(second_order, label='$y=x^2$')\\nplt.plot(fitted_t, label='$y=%.2f x^{%.2f} + %.2f$' % (popt[2], popt[0], popt[1]))\\nplt.legend(loc='upper left')\\nplt.ylim([0, np.max(n_triples)+100])\\nplt.xlim([0, np.max(n_relations)])\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Check the primers from Assembly 4 into the database.\\n\",\"targets\":\"from tinydb import TinyDB, Query\\n\\nprimerdb = TinyDB('..\\/..\\/data\\/primers.db.json')\\n\\ndef checkin_primers(primers, database):\\n \\\"\\\"\\\"\\n Checks the primers into the database.\\n \\\"\\\"\\\"\\n for primer in primers:\\n primer['received'] = False\\n p = Query()\\n if not database.contains(p.sequence == primer['sequence']):\\n database.insert(primer)\\n print('Inserting {primer}...'.format(primer=primer))\\n \\ncheckin_primers(assembly4_primers, primerdb)\\n\\ncheckin_primers(assembly5_primers, primerdb)\\n\\ncheckin_primers(assembly6_primers, primerdb)\\n\\nlen(primerdb)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%%bash\\n\\nexport PROJECT_ID=$(gcloud config list --format 'value(core.project)')\\nexport BENCHMARK_DATASET_NAME=tpcds_2t_baseline # Name of the dataset you want to create\\n\\n## Create a BigQuery dataset for tpcds_2t_flat_part_clust if it doesn't exist\\ndatasetexists=$(bq ls -d | grep -w $BENCHMARK_DATASET_NAME)\\n\\nif [ -n \\\"$datasetexists\\\" ]; then\\n echo -e \\\"BigQuery dataset $BENCHMARK_DATASET_NAME already exists, let's not recreate it.\\\"\\n\\nelse\\n echo \\\"Creating BigQuery dataset titled: $BENCHMARK_DATASET_NAME\\\"\\n \\n bq --location=US mk --dataset \\\\\\n --description 'Benchmark Dataset' \\\\\\n $PROJECT:$BENCHMARK_DATASET_NAME\\n echo \\\"\\\\nHere are your current datasets:\\\"\\n bq ls\\nfi\\nGiven the above code, generate some markdown instructions 
for it.\\n\",\"targets\":\"\\nIf you're an experienced BigQuery user, you likely have seen these same metrics in the Web UI as well as highlighted in the red box below:\\n\\nIt's a matter of preference whether you do your work in the Web UI or the command line -- each has it's advantages.\\nOne major advantage of using the bq command line interface is the ability to create a script that will run the remaining 98 benchmark queries for us and log the results. \\nCopy the dw-workshop dataset into your own GCP project\\nWe will use the new BigQuery Transfer Service to quickly copy our large dataset from the dw-workshop GCP project into your own so you can perform the benchmarking. \\nCreate a new baseline dataset in your project\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Add ServiceType constraint.\\n\",\"targets\":\"val = 'opendap'\\nfilter2 = fes.PropertyIsLike(propertyname='apiso:ServiceType',literal=('*%s*' % val),\\n escapeChar='\\\\\\\\',wildCard='*',singleChar='?')\\nfilter_list = [fes.And([filter1, filter2, bbox_filter])]\\ncsw.getrecords2(constraints=filter_list, maxrecords=1000, esn='full')\\n\\nprint(len(csw.records.keys()))\\nfor rec in list(csw.records.keys()):\\n print('title:'+csw.records[rec].title) \\n print('identifier:'+csw.records[rec].identifier)\\n print('modified:'+csw.records[rec].modified)\\n print(' ')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Several pixel types, some make sense in this case (vector types) and some are just show\\n# that the user's choice will force the pixel type even when it doesn't make sense\\n# (e.g. 
sitkVectorUInt16 or sitkUInt8).\\npixel_types = {\\n \\\"sitkUInt8\\\": sitk.sitkUInt8,\\n \\\"sitkUInt16\\\": sitk.sitkUInt16,\\n \\\"sitkFloat64\\\": sitk.sitkFloat64,\\n \\\"sitkVectorUInt8\\\": sitk.sitkVectorUInt8,\\n \\\"sitkVectorUInt16\\\": sitk.sitkVectorUInt16,\\n \\\"sitkVectorFloat64\\\": sitk.sitkVectorFloat64,\\n}\\n\\n\\ndef pixel_type_dropdown_callback(pixel_type, pixel_types_dict):\\n # specify the file location and the pixel type we want\\n img = sitk.ReadImage(fdata(\\\"SimpleITK.jpg\\\"), pixel_types_dict[pixel_type])\\n\\n print(img.GetPixelIDTypeAsString())\\n print(img[0, 0])\\n plt.imshow(sitk.GetArrayViewFromImage(img))\\n plt.axis(\\\"off\\\")\\n\\n\\ninteract(\\n pixel_type_dropdown_callback,\\n pixel_type=list(pixel_types.keys()),\\n pixel_types_dict=fixed(pixel_types),\\n);\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nRead an image in JPEG format and cast the pixel type according to user selection.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Create a Generative network\\nNext, let's build an RNN that produces text, and train it using (a) a pure dictionary look-up, and (b) the correctness signal from the Discriminator above.\\nPlan of attack : \\n\\nCreate a GRU that outputs a character probability distribution for every time step\\nRun the RNN several times :\\neach time is an additional character input longer \\nwith the next character chosen according to the probability distribution given\\nand then re-run with the current input words (up to that point)\\nStop adding characters when they've all reached 'space'\\n\\nThis seems very inefficient (since the first RNN steps are being run multiple times on the same starting letters), but is the same as in https:\\/\\/github.com\\/Lasagne\\/Recipes\\/blob\\/master\\/examples\\/lstm_text_generation.py\\n\",\"targets\":\"# Let's pre-calculate the logs of the bigram frequencies, since they may be mixed in below\\nbigram_min_freq = 1e-10 # To prevent underflow in log...\\nbigram_freq_log = np.log( bigram_freq + bigram_min_freq ).astype('float32')\\n\\n# Symbolic variables for input. In addition to the usual features and target,\\ngen_input_sym = theano.tensor.ftensor3()\\ngen_mask_sym = theano.tensor.imatrix()\\n\\ngen_words_target_sym = theano.tensor.imatrix() # characters generated (as character indicies)\\n\\n# probabilities of being from the dictionary (i.e. 
a single column matrix)\\ngen_valid_target_sym = theano.tensor.fmatrix( )\\n\\n# This is a single mixing parameter (0.0 = pure RNN, 1.0=pure Bigram)\\ngen_bigram_overlay = theano.tensor.fscalar()\\n\\n# This is 'current' since it reflects the bigram field as far as it is known during the call\\ngen_bigram_freq_log_field = theano.tensor.ftensor3()\\n\\ngen_input = lasagne.layers.InputLayer( (None, None, CHARS_SIZE) ) # batch_size, sequence_len, chars_size\\ngen_mask = lasagne.layers.InputLayer( (None, None, CHARS_SIZE) ) # batch_size, sequence_len, chars_size\\n\\n#gen_rnn1_t0 = lasagne.layers.InputLayer( (None, RNN_HIDDEN_SIZE) ) # batch_size, RNN_hidden_size=chars_size\\n\\n#n_batch, n_time_steps, n_features = gen_input.input_var.shape\\nn_batch, n_time_steps, n_features = gen_input_sym.shape\\n\\ngen_rnn1 = lasagne.layers.GRULayer(gen_input,\\n num_units=RNN_HIDDEN_SIZE,\\n gradient_steps=-1,\\n grad_clipping=GRAD_CLIP_BOUND,\\n #hid_init=disc_rnn1_t0,\\n hid_init=lasagne.init.Normal(),\\n learn_init=True,\\n mask_input=gen_mask,\\n only_return_final=False, # Need all of the output states\\n )\\n\\n# Before the decoder layer, we need to reshape the sequence into the batch dimension,\\n# so that timesteps are decoded independently.\\ngen_reshape = lasagne.layers.ReshapeLayer(gen_rnn1, (-1, RNN_HIDDEN_SIZE) )\\n\\ngen_prob_raw = lasagne.layers.DenseLayer(gen_reshape, \\n num_units=CHARS_SIZE, \\n ...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"** Remark **\\n'blocking' is not an optimization level -- it rather identifies a specific compilation pass. In other words, the advanced mode defines an ordered sequence of passes, and blocking is one such pass.\\n****\\nThe blocking pass creates additional loops over blocks. In this simple Operator there's just one loop nest, so only a pair of additional loops are created. In more complex Operator's, several loop nests may individually be blocked, whereas others may be left unblocked -- this is decided by the Devito compiler according to certain heuristics. The size of a block is represented by the symbols x0_blk0_size and y0_blk0_size, which are runtime parameters akin to nthreads. \\nBy default, Devito applies 2D blocking and sets the default block shape to 8x8. There are two ways to set a different block shape:\\n\\npassing an explicit value. For instance, below we run with a 24x8 block shape\\n\\nop1_omp.apply(..., x0_blk0_size=24)\\n\\nletting the autotuner pick up a better block shape for us. There are several autotuning modes. 
A short summary is available here\\n\\nop1_omp.apply(..., autotune='aggressive')\\nLoop blocking also provides two optimization options:\\n\\nblockinner={False, True} -- to enable 3D (or any nD, n>2) blocking\\nblocklevels={int} -- to enable hierarchical blocking, to exploit multiple levels of the cache hierarchy \\n\\nIn the example below, we construct an Operator with six-dimensional loop blocking: the first three loops represent outer blocks, whereas the second three loops represent inner blocks within an outer block.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nop1_omp_6D = Operator(eq, opt=('blocking', {'blockinner': True, 'blocklevels': 2, 'openmp': True}))\\n# print(op1_omp_6D) # Uncomment to see the *whole* generated code\\nprint_kernel(op1_omp_6D)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Construct the task requirements\\nNext, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.\\nThe minimal fields you need to specify are:\\n\\nbudget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.\\nmodel_type: The type of deployed model:\\nCLOUD_HIGH_ACCURACY_1: For deploying to Google Cloud and optimizing for accuracy.\\nCLOUD_LOW_LATENCY_1: For deploying to Google Cloud and optimizing for latency (response time),\\n\\nFinally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nPIPE_NAME = \\\"unknown_pipe-\\\" + TIMESTAMP\\nMODEL_NAME = \\\"unknown_model-\\\" + TIMESTAMP\\n\\ntask = json_format.ParseDict(\\n {\\\"budget_milli_node_hours\\\": 2000, \\\"model_type\\\": \\\"CLOUD_LOW_ACCURACY_1\\\"}, Value()\\n)\\n\\nresponse = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Python\\/2016-07-25\\/aula3-parte1-numpy.ipynb\\\".\\nThe first task is:\\nnumpy.linspace - gera n\u00fameros uniformemente espa\u00e7ados ao longo de um intervalo especificado. \\nlinspace(start, stop, num=50, retstep=True)\\nstart - valor de inicio da sequ\u00eancia.\\nstop - o valor final da sequ\u00eancia.\\nnum - numero da amostra a ser gerada (50 \u00e9 padr\u00e3o).\\nretstep - retorna o espa\u00e7amento (False \u00e9 padr\u00e3o).\\nCan you write Python code for it?\\n\",\"targets\":\"\\nnp.linspace(0.5, 4.0, num=5, retstep=True)\\n\\nnp.linspace(2, 3, num=6, retstep=True) #(3-2)\\/(6-1)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Comando de magic %cython\\n```{margin}\\nVer extensions-bundled-with-ipython para extensiones que antes se inclu\u00edan en Ipython.\\n```\\nAl instalar Cython se incluye tal comando. 
Al ejecutarse crea el archivo .pyx, lo compila con setup.py e importa en el notebook.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n%load_ext Cython\\n\\n%%cython\\ndef Rcf(f,a,b,n):\\n \\\"\\\"\\\"\\n Compute numerical approximation using rectangle or mid-point\\n method in an interval.\\n Nodes are generated via formula: x_i = a+(i+1\\/2)h_hat for\\n i=0,1,...,n-1 and h_hat=(b-a)\\/n\\n Args:\\n \\n f (float): function expression of integrand.\\n \\n a (float): left point of interval.\\n \\n b (float): right point of interval.\\n \\n n (int): number of subintervals.\\n \\n Returns:\\n \\n sum_res (float): numerical approximation to integral\\n of f in the interval a,b\\n \\\"\\\"\\\"\\n h_hat = (b-a)\\/n\\n nodes = [a+(i+1\\/2)*h_hat for i in range(n)]\\n sum_res = 0\\n for node in nodes:\\n sum_res = sum_res+f(node)\\n return h_hat*sum_res\\n\\nstart_time = time.time()\\nres = Rcf(f, a, b,n)\\nend_time = time.time()\\n\\nsecs = end_time-start_time\\nprint(\\\"Rcf tom\u00f3\\\",secs,\\\"segundos\\\" )\\n\\nobj, err = quad(f, a, b)\\n\\nprint(res == approx(obj))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# accessing Twitter data through the authentification keys defined in the previous cell\\napi = tweepy.API(auth) \\nuser = api.get_user(id = \\\"realDonaldTrump\\\") # Since I am interested in Trump's tweets, we access his account\\n\\n# Here I just print out the information on the account to ensure it is the right one\\nprint('Name: ' + user.name)\\nprint('Location: ' + user.location)\\nprint('Friends: ' + str(user.friends_count))\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nHere, I just ensure that I am looking at the right data, and that I can access the twitter account I wish to gather data on.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Analyse already evaluated components\\n\",\"targets\":\"import unidecode\\nimport numpy as np\\nimport matplotlib.pyplot as plt\\n \\ndef plot_polarity_subjectivity(listed_name_on_database):\\n pol = []\\n sub = []\\n \\n articles_of_a_newspaper = db.child(str(\\\"articles\\/\\\" + listed_name_on_database)).get()\\n articles = articles_of_a_newspaper.val()\\n \\n for article_no in range(len(articles)):\\n data = list(articles.items())[article_no][1]\\n pol.append(abs(float(data[\\\"polarity\\\"])))\\n sub.append(float(data[\\\"subjectivity\\\"]))\\n \\n plt.scatter(pol,sub,[80\\/np.sqrt(len(pol))]*len(sub), alpha=0.7, label = listed_name_on_database)\\n \\n return np.column_stack((pol, sub))\\n \\n\\nplt.clf()\\nplt.figure(figsize=(12, 10))\\nplt.title(\\\"Scatter Plot (Articles)\\\")\\n\\nwebsites = [\\\"wwwchannelnewsasiacom\\\",\\\"wwwstraitstimescom\\\",\\\"wwwtnpsg\\\",\\\"wwwtodayonlinecom\\\",\\n \\\"sgnewsyahoocom\\\",\\\"sgfinanceyahoocom\\\",\\\"stompstraitstimescom\\\",\\\"mothershipsg\\\",\\n \\\"thehearttruthscom\\\",\\\"wwwtremerituscom\\\",\\\"yawningbreadwordpresscom\\\",\\n \\\"wwwtheonlinecitizencom\\\",\\\"wwwallsingaporestuffcom\\\",\\\"alvinologycom\\\",\\\"berthahensonwordpresscom\\\"]\\n\\ncentroid ={}\\nfor website in websites:\\n data = plot_polarity_subjectivity(website)\\n time.sleep(0.2)\\n centroid[website] = np.mean(data, 
axis=0)\\n\\nplt.legend(loc=4)\\nplt.xlabel(\\\"Polarity\\\")\\nplt.ylabel(\\\"Subjectivity\\\")\\n\\nplt.show()\\n\\nplt.clf()\\nplt.figure(figsize=(12, 10))\\nplt.title(\\\"Centroids (Sources)\\\")\\n\\nmothershipsg = centroid[\\\"wwwchannelnewsasiacom\\\"]\\nplt.scatter(mothershipsg[0],mothershipsg[1],label=\\\"wwwchannelnewsasiacom\\\")\\n#plt.annotate(\\\"wwwchannelnewsasiacom\\\",(mothershipsg[0],mothershipsg[1]))\\nmothershipsg = centroid[\\\"wwwstraitstimescom\\\"]\\nplt.scatter(mothershipsg[0],mothershipsg[1],label=\\\"wwwstraitstimescom\\\")\\n#plt.annotate(\\\"wwwstraitstimescom\\\",(mothershipsg[0],mothershipsg[1]))\\nmothershipsg = centroid[\\\"wwwtnpsg\\\"]\\nplt.scatter(mothershipsg[0],mothershipsg[1],label=\\\"wwwtnpsg\\\")\\n#plt.annotate(\\\"wwwtnpsg\\\",(mothershipsg[0],mothershipsg[1]))\\nmothershipsg =...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"3. Extract Movie Embeddings\\n\",\"targets\":\"def find_embedding_tensor():\\n with tf.Session() as sess:\\n saver = tf.train.import_meta_graph(os.path.join(model_dir, 'model.ckpt-100000.meta'))\\n saver.restore(sess, os.path.join(model_dir, 'model.ckpt-100000'))\\n graph = tf.get_default_graph()\\n trainable_tensors = map(str, graph.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES))\\n for tensor in set(trainable_tensors):\\n print tensor\\n \\nfind_embedding_tensor()\\n\\ndef extract_embeddings():\\n \\n with tf.Session() as sess:\\n saver = tf.train.import_meta_graph(os.path.join(model_dir, 'model.ckpt-100000.meta'))\\n saver.restore(sess, os.path.join(model_dir, 'model.ckpt-100000'))\\n graph = tf.get_default_graph()\\n weights_tensor = graph.get_tensor_by_name('input_layer_1\\/movieId_embedding\\/embedding_weights:0')\\n weights = np.array(sess.run(weights_tensor))\\n \\n embeddings = {}\\n for i in range(weights.shape[0]):\\n embeddings[i] = weights[i]\\n \\n return embeddings\\n\\n\\nembeddings = extract_embeddings()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Align your images\\n\",\"targets\":\"!python align_images.py raw_images aligned_images\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"DS_ML_Jan4.ipynb\\\".\\nThe first task is:\\nMeasuring Thickness\\nCan you write Python code for it?\\n\",\"targets\":\"\\nasset=dataset.km.tail(100)\\nscores = pd.Series(index=asset.index)\\nfor date in rolling_means.index:\\n mavg_values = rolling_means.loc[date]\\n d = np.max(mavg_values) - np.min(mavg_values)\\n scores[date] = d\\n \\n# Normalize the score\\n(10 * scores).plot();\\nasset.plot()\\nplt.legend(['Signal', 'Asset Price']);\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"7) Critics's Picks\\nHow many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60?\\nTip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? 
Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.\\n\",\"targets\":\"#first 20 movies\\nmovie_response = requests.get('https:\\/\\/api.nytimes.com\\/svc\\/movies\\/v2\\/reviews\\/search.json?&api-key=3880684abea14d86b6280c6dbd80a793')\\nmovie_data = movie_response.json()\\n#print(movie_data)\\n#print(movie_data.keys())\\ncount = 0\\nmovie_result = movie_data['results']\\nfor i in movie_result:\\n #print(i)\\n #print(i.keys())\\n #print(item)\\n if i['critics_pick']:\\n count = count + 1\\nprint(\\\"Out of last 20 movies\\\", count, \\\"movies were critics picks\\\")\\n\\n#first 40 movies\\nmovie_response = requests.get('https:\\/\\/api.nytimes.com\\/svc\\/movies\\/v2\\/reviews\\/search.json?&&offset=20&api-key=3880684abea14d86b6280c6dbd80a793')\\nmovie_data = movie_response.json()\\n#print(movie_data)\\n#print(movie_data.keys())\\ncount_40 = 0\\nmovie_result = movie_data['results']\\nfor i in movie_result:\\n #print(i)\\n #print(i.keys())\\n #print(item)\\n if i['critics_pick']:\\n count_40 = count_40 + 1\\n#print(count_40)\\nlast_fourty = count + count_40\\nprint(\\\"Out of last 40 movies\\\", last_fourty, \\\"movies were critics picks\\\")\\n\\n#first 60 movies\\nmovie_response = requests.get('https:\\/\\/api.nytimes.com\\/svc\\/movies\\/v2\\/reviews\\/search.json?&offset=40&api-key=3880684abea14d86b6280c6dbd80a793')\\nmovie_data = movie_response.json()\\n#print(movie_data)\\n#print(movie_data.keys())\\ncount_60 = 0\\nmovie_result = movie_data['results']\\nfor i in movie_result:\\n #print(i)\\n #print(i.keys())\\n #print(item)\\n if i['critics_pick']:\\n count_60 = count_60 + 1\\n#print(count_60)\\nlast_sixty = last_fourty + count_60\\nprint(\\\"Out of last 60 movies\\\", last_sixty, \\\"movies were critics picks\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Lectures\\/Lecture18 -- Diffusive energy balance model.ipynb\\\".\\nThe first task is:\\nDiscuss these curves...\\nSuggestion of at least 3 different regimes with different slopes (cold, medium, warm).\\nUnbiased \\\"best fit\\\" is actually a poor fit over all the intermediate temperatures.\\nThe astute reader will note that... by taking the zonal average of the data before the regression, we are biasing this estimate toward cold temperatures. [WHY?]\\nLet's take these reference values:\\n$$ A = 210 ~ \\\\text{W m}^{-2}, ~~~ B = 2 ~ \\\\text{W m}^{-2}~^\\\\circ\\\\text{C}^{-1} $$\\nNote that in the global average, recall $\\\\overline{T_s} = 288 \\\\text{ K} = 15^\\\\circ\\\\text{C}$\\nAnd so this parameterization gives \\n$$ \\\\overline{\\\\text{OLR}} = 210 + 15 \\\\times 2 = 240 ~\\\\text{W m}^{-2} $$\\nAnd the observed global mean is $\\\\overline{\\\\text{OLR}} = 239 ~\\\\text{W m}^{-2} $\\nSo this is consistent.\\n\\n<\\/a>\\n5. The one-dimensional diffusive energy balance model\\n\\nPutting the above OLR parameterization into our budget equation gives\\n$$ C(\\\\phi) \\\\frac{\\\\partial T_s}{\\\\partial t} = (1-\\\\alpha) ~ Q - \\\\left( A + B~T_s \\\\right) + \\\\frac{D}{\\\\cos\u2061\\\\phi } \\\\frac{\\\\partial }{\\\\partial \\\\phi} \\\\left( \\\\cos\u2061\\\\phi ~ \\\\frac{\\\\partial T_s}{\\\\partial \\\\phi} \\\\right) $$\\nThis is the equation for a very important and useful simple model of the climate system. 
It is typically referred to as the (one-dimensional) Energy Balance Model.\\n(although as we have seen over and over, EVERY climate model is actually an \u201cenergy balance model\u201d of some kind)\\nAlso for historical reasons this is often called the Budyko-Sellers model, after Budyko and Sellers who both (independently of each other) published influential papers on this subject in 1969.\\nRecap: parameters in this model are\\n\\nC: heat capacity in J m$^{-2}$ \u00baC$^{-1}$\\nA: longwave emission at 0\u00baC in W m$^{-2}$\\nB: increase in emission per degree, in W m$^{-2}$ \u00baC$^{-1}$\\nD: horizontal (north-south) diffusivity of the climate system in W m$^{-2}$ \u00baC$^{-1}$\\n\\nWe also need to specify the albedo.\\nTune albedo formula to match observations\\nLet's go back to the NCEP Reanalysis data to see how planetary albedo actually varies as a...\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndays = np.linspace(1.,50.)\\/50 * const.days_per_year\\nQann_ncep = climlab.solar.insolation.daily_insolation(lat_ncep, days ).mean(dim='day')\\nalbedo_ncep = 1 - ASR_ncep_annual \\/ Qann_ncep\\n\\nalbedo_ncep_global = np.average(albedo_ncep, weights=np.cos(np.deg2rad(lat_ncep)))\\n\\nprint( 'The annual, global mean planetary albedo is %0.3f' %albedo_ncep_global)\\nfig,ax = plt.subplots()\\nax.plot(lat_ncep, albedo_ncep)\\nax.grid();\\nax.set_xlabel('Latitude')\\nax.set_ylabel('Albedo');\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"All of the important sigmas are contained in our guesses.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nmarginal_sigmas = LeastSquaresHypos.Marginal(2)\\nthinkplot.hist(marginal_sigmas)\\nthinkplot.Config(xlabel='sigma', ylabel='probability')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"A gym environment contains all relevant data describing the problem. We can directly inspect the action space and the observation space of the environment. 
In case of mountaincar, we have a discrete action space with 3 actions and a continuous observation space with 2 variables: position $\\\\in$ [-1.2,0.6] and velocity $\\\\in$ [-0.07 and 0.07]\\n\",\"targets\":\"print env.action_space\\nprint env.observation_space\\nprint env.observation_space.low\\nprint env.observation_space.high\\nprint env.goal_position\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"And here's an example using the parameters in the question.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\npmf = MakePmfUsingBinom(5, 0.1, 200)\\nthinkplot.Pdf(pmf)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Add dropout layer by hand to an MLP\\n\",\"targets\":\"def dropout_layer(X, dropout):\\n assert 0 <= dropout <= 1\\n # In this case, all elements are dropped out\\n if dropout == 1:\\n return torch.zeros_like(X)\\n # In this case, all elements are kept\\n if dropout == 0:\\n return X\\n mask = (torch.Tensor(X.shape).uniform_(0, 1) > dropout).float()\\n return mask * X \\/ (1.0 - dropout)\\n\\n# quick test\\ntorch.manual_seed(0)\\nX = torch.arange(16, dtype=torch.float32).reshape((2, 8))\\nprint(X)\\nprint(dropout_layer(X, 0.0))\\nprint(dropout_layer(X, 0.5))\\nprint(dropout_layer(X, 1.0))\\n\\n# A common trend is to set a lower dropout probability closer to the input layer\\nclass Net(nn.Module):\\n def __init__(\\n self, num_inputs, num_outputs, num_hiddens1, num_hiddens2, is_training=True, dropout1=0.2, dropout2=0.5\\n ):\\n super(Net, self).__init__()\\n self.dropout1 = dropout1\\n self.dropout2 = dropout2\\n self.num_inputs = num_inputs\\n self.training = is_training\\n self.lin1 = nn.Linear(num_inputs, num_hiddens1)\\n self.lin2 = nn.Linear(num_hiddens1, num_hiddens2)\\n self.lin3 = nn.Linear(num_hiddens2, num_outputs)\\n self.relu = nn.ReLU()\\n\\n def forward(self, X):\\n H1 = self.relu(self.lin1(X.reshape((-1, self.num_inputs))))\\n # Use dropout only when training the model\\n if self.training == True:\\n # Add a dropout layer after the first fully connected layer\\n H1 = dropout_layer(H1, self.dropout1)\\n H2 = self.relu(self.lin2(H1))\\n if self.training == True:\\n # Add a dropout layer after the second fully connected layer\\n H2 = dropout_layer(H2, self.dropout2)\\n out = self.lin3(H2)\\n return out\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"#@title Installs\\n!pip install --upgrade tensorflow-model-remediation\\n!pip install --upgrade fairness-indicators\\n\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nModel Remediation Case Study\\n
    \\n View on TensorFlow.org<\\/a>\\n<\\/td>\\n\\n \\n Run in Google Colab<\\/a>\\n<\\/td>\\n\\n \\n View source on GitHub<\\/a>\\n<\\/td>\\n\\n Download notebook<\\/a>\\n<\\/td>\\n<\\/table><\\/div>\\n\\nIn this notebook, we\u2019ll train a text classifier to identify written content that could be considered toxic or harmful, and apply MinDiff to remediate some fairness concerns. In our workflow, we will:\\n1. Evaluate our baseline model\u2019s performance on text containing references to sensitive groups. \\n2. Improve performance on any underperforming groups by training with MinDiff. \\n3. Evaluate the new model\u2019s performance on our chosen metric.\\nOur purpose is to demonstrate usage of the MinDiff technique with a very minimal workflow, not to lay out a principled approach to fairness in machine learning. As such, our evaluation will only focus on one sensitive category and a single metric. We also don\u2019t address potential shortcomings in the dataset, nor tune our configurations. In a production setting, you would want to approach each of these with rigor. For more information on evaluating for fairness, see this guide.\\nSetup\\nWe...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Array indexing\\nNumpy offers several ways to index into arrays.\\nSlicing: Similar to Python lists, numpy arrays can be sliced. Since arrays may be multidimensional, you must specify a slice for each dimension of the array:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nimport numpy as np\\n\\n# Create the following rank 2 array with shape (3, 4)\\n# [[ 1 2 3 4]\\n# [ 5 6 7 8]\\n# [ 9 10 11 12]]\\na = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])\\n\\n# Use slicing to pull out the subarray consisting of the first 2 rows\\n# and columns 1 and 2; b is the following array of shape (2, 2):\\n# [[2 3]\\n# [6 7]]\\nb = a[:2, 1:3]\\nprint(b)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Agora, vamos calcular a transformada de Fourier $F(f)$ da imagem e $F(h)$ do kernel. Antes de mais nada, precisamos garantir que a imagem $f$ e o kernel $h$ sejam peri\u00f3dicos e tenham o mesmo tamanho.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# Aumentando h para o tamanho de f\\naux = np.zeros(f.shape)\\nr,c = h.shape\\naux[:r,:c] = h\\n\\n# Calculando a Transformada de Fourier da f e h\\nF = fft2(f)\\nH = fft2(aux)\\n\\n# Multiplicando-se as Tranformadas\\nG = F * H\\n\\n# Calculando a Transformada inversa\\ngg = ifft2(G)\\n\\nprint(\\\"Result gg: \\\\n\\\",np.around(gg))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"All of these distributions have the same area under the curve, so they represent the same total number of units sold. Smaller values of $n$ give flatter distributions (less head, more tail) and larger values of $n$ give more head-heavy distributions.\\nWhat is the total value in the head versus the tail? 
Define the head to be the 10% of styles with the largest sales volume, the tail to be the 50% of styles with the lowest sales volumes, and the middle to be those in between.\\nThat is, the head, tail, and middle look like this:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# Same plot as above\\nfig, ax = plt.subplots(figsize=(10, 6))\\n\\nfor nn in ns:\\n norm = (nn-1)*volume\\/(1-inventory**(1-nn))\\n ax.plot(rr, norm\\/rr**nn, label='$n=%g$' % nn)\\n\\nax.set_xlabel('Rank by Sales Volume $r$')\\nax.set_ylabel('Units Sold')\\nax.set_title('Sales volume of each product by rank')\\nax.set_ylim(0,100)\\n\\n# Ask seaborn for some pleasing colors\\nc1, c2, c3 = sns.color_palette(n_colors=3)\\n\\n# Add transparent rectangles\\nhead_patch = plt.matplotlib.patches.Rectangle((1,0), 9, 100, alpha=0.25, color=c1)\\nmiddle_patch = plt.matplotlib.patches.Rectangle((11,0), 39, 100, alpha=0.25, color=c2)\\ntail_patch = plt.matplotlib.patches.Rectangle((51,0), 48, 100, alpha=0.25, color=c3)\\nax.add_patch(head_patch)\\nax.add_patch(middle_patch)\\nax.add_patch(tail_patch)\\n\\n# Add text annotations\\nax.text(5,50,\\\"Head\\\", color=c1, fontsize=16, rotation=90)\\nax.text(25,80,\\\"Middle\\\", color=c2, fontsize=16)\\nax.text(75,80,\\\"Tail\\\", color=c3, fontsize=16)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"#\\n# Network model can be initialized using following syntax in the constructor itself\\n#\\nmodel1 = Sequential([\\n Dense(32,input_dim=784),\\n Activation(\\\"relu\\\"),\\n Dense(10),\\n Activation(\\\"softmax\\\")\\n])\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nEither define entire neural network inside the constructor of the Sequential class as below,\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Lectures\\/Lecture2-Jupyter_and_python\\/Lecture-2-Introduction-to-Python-Programming 2017.ipynb\\\".\\nThe first task is:\\nbuilt in magic commands start with\\nA good list of the commands are found in:\\nhttps:\\/\\/ipython.org\\/ipython-doc\\/3\\/interactive\\/magics.html\\nCan you write Python code for it?\\n\",\"targets\":\"\\n%%sh \\ncat ..\\/..\\/Scripts\\/hello-world.py\\n\\n!python ..\\\\..\\\\Scripts\\\\hello-world.py\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Combining DataFrames\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndf1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],\\n 'B': ['B0', 'B1', 'B2', 'B3'],\\n 'C': ['C0', 'C1', 'C2', 'C3'],\\n 'D': ['D0', 'D1', 'D2', 'D3']},\\n index=[0, 1, 2, 3])\\ndf2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],\\n 'B': ['B4', 'B5', 'B6', 'B7'],\\n 'C': ['C4', 'C5', 'C6', 'C7'],\\n 'D': ['D4', 'D5', 'D6', 'D7']},\\n index=[4, 5, 6, 7]) \\ndf3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],\\n 'B': ['B8', 'B9', 'B10', 'B11'],\\n 'C': ['C8', 'C9', 'C10', 'C11'],\\n 'D': ['D8', 'D9', 'D10', 'D11']},\\n index=[8, 9, 10, 11])\\n\\n# Combine along the rows\\npd.concat([df1, df2, df3])\\n\\n# Combine along the columns\\n# Note that Pandas assigns cell values that does not align 
correct to NaN\\npd.concat([df1, df2, df3], axis=1)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"
    \\n EXERCISE<\\/b>:\\n
    Figure 3: Emojifier-V2. A 2-layer LSTM sequence classifier. <\\/center><\\/caption>\\n2.2 Keras and mini-batching\\nIn this exercise, we want to train Keras using mini-batches. However, most deep learning frameworks require that all sequences in the same mini-batch have the same length. This is what allows vectorization to work: If you had a 3-word sentence and a 4-word sentence, then the computations needed for them are different (one takes 3 steps of an LSTM, one takes 4 steps) so it's just not possible to do them both at the same time.\\nThe common solution to this is to use padding. Specifically, set a maximum sequence length, and pad all sequences to the same length. For example, of the maximum sequence length is 20, we could pad every sentence with \\\"0\\\"s so that each input sentence is of length 20. Thus, a sentence \\\"i love you\\\" would be represented as $(e_{i}, e_{love}, e_{you}, \\\\vec{0}, \\\\vec{0}, \\\\ldots, \\\\vec{0})$. In this example, any sentences longer than 20 words would have to be truncated. One simple way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set. \\n2.3 - The Embedding layer\\nIn Keras, the embedding matrix is represented as a \\\"layer\\\", and maps positive integers (indices corresponding to words) into dense vectors of fixed size (the embedding vectors). It can be trained or initialized with a pretrained embedding. In this part, you will learn how to create an Embedding() layer in Keras, initialize it with the GloVe 50-dimensional vectors loaded earlier in the notebook. Because our training set is quite small, we will not update the word embeddings but will instead leave their values fixed. But in the code below, we'll show you how Keras allows you to either train or leave fixed this layer. \\nThe Embedding() layer takes an integer matrix of size (batch size, max input length) as input....\\n\",\"targets\":\"# GRADED FUNCTION: sentences_to_indices\\n\\ndef sentences_to_indices(X, word_to_index, max_len):\\n \\\"\\\"\\\"\\n Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences.\\n The output shape should be such that it can be given to `Embedding()` (described in Figure 4). \\n \\n Arguments:\\n X -- array of sentences (strings), of shape (m, 1)\\n word_to_index -- a dictionary containing the each word mapped to its index\\n max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this. \\n \\n Returns:\\n X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len)\\n \\\"\\\"\\\"\\n \\n m = X.shape[0] # number of training examples\\n \\n ### START CODE HERE ###\\n # Initialize X_indices as a numpy matrix of zeros and the correct shape (\u2248 1 line)\\n X_indices = np.zeros((m, max_len))\\n \\n for i in range(m): # loop over training examples\\n \\n # Convert the ith training sentence in lower case and split is into words. 
You should get a list of words.\\n sentence_words = [w.lower() for w in X[i].split()]\\n \\n # Initialize j to 0\\n j = 0\\n \\n # Loop over the words of sentence_words\\n for w in sentence_words:\\n # Set the (i,j)th entry of X_indices to the index of the correct word.\\n X_indices[i, j] = word_to_index[w]\\n # Increment j to j + 1\\n j += 1\\n \\n ### END CODE HERE ###\\n \\n return X_indices\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# The following exanples uses an HiKey board\\nADB_DEVICE = '607A87C400055E6E'\\n\\n# logging.getLogger().setLevel(logging.DEBUG)\\n\\n# Unified configuration dictionary\\nmy_conf = {\\n \\n # Tools required\\n \\\"tools\\\" : ['rt-app', 'trace-cmd'],\\n \\n # RTApp calibration\\n #\\\"modules\\\" : ['cpufreq'],\\n \\\"rtapp-calib\\\" : {\\n \\\"0\\\": 254, \\\"1\\\": 252, \\\"2\\\": 252, \\\"3\\\": 251,\\n \\\"4\\\": 251, \\\"5\\\": 252, \\\"6\\\": 251, \\\"7\\\": 251\\n },\\n \\n # FTrace configuration\\n \\\"ftrace\\\" : {\\n \\n # Events to trace\\n \\\"events\\\" : [\\n \\\"sched_switch\\\",\\n \\\"sched_wakeup\\\",\\n \\\"sched_wakeup_new\\\",\\n \\\"sched_wakeup_tracking\\\", \\n \\\"sched_stat_wait\\\",\\n \\\"sched_overutilized\\\",\\n \\\"sched_contrib_scale_f\\\",\\n \\\"sched_load_avg_cpu\\\",\\n \\\"sched_load_avg_task\\\",\\n \\\"sched_tune_config\\\",\\n \\\"sched_tune_filter\\\",\\n \\\"sched_tune_tasks_update\\\",\\n \\\"sched_tune_boostgroup_update\\\",\\n \\\"sched_boost_cpu\\\",\\n \\\"sched_boost_task\\\",\\n \\\"sched_energy_diff\\\",\\n \\\"cpu_capacity\\\",\\n \\\"cpu_frequency\\\",\\n \\\"cpu_idle\\\",\\n \\\"walt_update_task_ravg\\\",\\n \\\"walt_update_history\\\",\\n \\\"walt_migration_update_sum\\\",\\n ],\\n \\n# # Kernel functions to profile\\n# \\\"functions\\\" : [\\n# \\\"pick_next_task_fair\\\",\\n# \\\"select_task_rq_fair\\\",\\n# \\\"enqueue_task_fair\\\",\\n# \\\"update_curr_fair\\\",\\n# \\\"dequeue_task_fair\\\",\\n# ],\\n \\n # Per-CPU buffer configuration\\n \\\"buffsize\\\" : 10 * 1024,\\n },\\n \\n # Target platform\\n \\\"platform\\\" : 'android',\\n \\\"board\\\" : 'hikey',\\n \\\"device\\\" : ADB_DEVICE,\\n \\\"results_dir\\\" : \\\"ReleaseNotes_v16.09\\\",\\n \\\"ANDROID_HOME\\\" : \\\"\\/opt\\/android-sdk-linux\\\",\\n \\\"CATAPULT_HOME\\\" : \\\"\\/home\\/derkling\\/Code\\/catapult\\\",\\n}\\n\\nfrom env import TestEnv\\n\\nte = TestEnv(my_conf, force_new=True)\\ntarget = te.target\\n\\nfrom wlgen import RTA,Ramp\\n\\n# Let's run a...\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nImproved Trace Analysis support\\nThe Trace module is a wrapper around the TRAPpy library which has been\\nupdated to:\\n\\nsupport parsing of systrace file format
    \\n requires catapult locally installed
    \\n https:\\/\\/github.com\\/catapult-project\\/catapult\\nparsing and DataFrame generation for custom events\\n\\nCreate an example trace\\nNOTE: the cells in this sections are required just to create\\n a trace file to be used by the following sections\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"9. Key Properties --> Conservation --> Salt\\nGlobal salt convervation properties of the model\\n9.1. Ocean Seaice Interface\\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\\nDescribe if\\/how salt is conserved at the ocean\\/sea-ice coupling interface\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# PROPERTY ID - DO NOT EDIT ! \\nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \\n\\n# PROPERTY VALUE: \\n# Set as follows: DOC.set_value(\\\"value\\\") \\n# TODO - please enter value(s)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Examples\\/IPNotebooks\\/01 Notes on ipzCaptureWindow functions.ipynb\\\".\\nThe first task is:\\nPerform a quick-focus\\nCan you write Python code for it?\\n\",\"targets\":\"\\nl.zQuickFocus()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Circular Spin Chain Model Implementation\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\np1 = CircularSpinChain(N, correct_global_phase=True)\\n\\nU_list = p1.run(qc)\\n\\nU_physical = gate_sequence_product(U_list)\\n\\nU_physical.tidyup(atol=1e-5)\\n\\n(U_ideal - U_physical).norm()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"#simple_curve.data\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nThe textual representation indicates that this object is a continuous mapping from x to y, which is how HoloViews knew to render it as a continuous curve. You can also access the full original data if you wish:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"GSOC\\/notebooks\\/Projects\\/GRASS\\/python-grass-addons\\/04_script_to_grass_module.ipynb\\\".\\nThe first task is:\\nThe construction if __name__ == '__main__': is a standard way in Python of marking code which should be executed when the file is used as a script. It is a best practice to use the above mentioned template. \\nGRASS GIS parser\\nEvery GRASS GIS module must use the GRASS parser mechanism. This very advanced parser helps to check the user interput, format the help text and optionally create a graphical user interface for the new module.\\nIn Python, this means calling the parser() function from grass.script package. This function parses the special comments written at the beginning of the Python file (below the 'shebang'), processes the parameters provided in command line when using the module and provides these data accordingly within the module. 
These special comments start with #% and can be referred to as interface definition comments.\\nMinimal template\\nThe interface definition comment should contain at least the description of the module and two keywords as shown below. Existing GRASS GIS Python scripts may help to understand the best practice. These values are defined in section module which contains the description and keyword keys.\\nCan you write Python code for it?\\n\",\"targets\":\"\\n%%file r.viewshed.points.py\\n#!\\/usr\\/bin\\/env python\\n\\n#%module\\n#% description: Compute and analyze viewsheds\\n#% keyword: raster\\n#% keyword: viewshed\\n#%end\\n\\nimport grass.script as gscript\\n\\ndef main():\\n gscript.parser()\\n\\nif __name__ == '__main__':\\n main()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Can't use because no test data for comparison\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nfrom nilmtk.metrics import f1_score\\nf1_hart= f1_score(disag_hart_elec, test_elec)\\nf1_hart.index = disag_hart_elec.get_labels(f1_hart.index)\\nf1_hart.plot(kind='barh')\\nplt.ylabel('appliance');\\nplt.xlabel('f-score');\\nplt.title(\\\"Hart\\\");\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Data_analysis\\/SNP-indel-calling\\/dadi\\/MODIFIED_SPECTRA\\/01_modified_1D_models.ipynb\\\".\\nThe first task is:\\nThis looks like convergence.\\nInterpretation\\nCan you write Python code for it?\\n\",\"targets\":\"\\npopt = np.array( df.sort_values(by='-logL', ascending=True).iloc[0, 3:6] )\\npopt\\n\\n# calculate best-fit model spectrum\\n\\nmodel_spectrum = func_ex(popt, ns, pts_l)\\n\\ntheta = dadi.Inference.optimal_sfs_scaling(model_spectrum, fs_ery)\\n\\nmu = 3e-9\\nL = fs_ery.data.sum()\\nprint \\\"The optimal value of theta per site for the ancestral population is {0:.4f}.\\\".format(theta\\/L)\\n\\nNref = theta\\/L\\/mu\\/4\\nNref\\n\\nprint \\\"At time {0:,} generations ago, the ERY population size instantaneously increased by almost 55-fold (to {1:,}).\\\".format(int(popt[2]*2*Nref), int(popt[0]*Nref))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"2. 
D\u00e9codage de Lempel-Ziv-Welch\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndef decodage_LZW(code_lu: typing.List[int]) -> str:\\n # Mot produit en sortie, vide au d\u00e9but\\n mot_lu = \\\"\\\"\\n # Dictionnaire mappant les entiers \u00e0 la cha\u00eene associ\u00e9e\\n dictionnaire = { i: chr(i) for i in range(255) }\\n # On saute la valeur 256\\n prochain_code = 257\\n # Mot actuel, vide au d\u00e9but\\n mot_actuel = \\\"\\\"\\n \\n m = len(code_lu)\\n # Si le code lu est vide, on renvoie un texte vide\\n if m == 0: return \\\"\\\"\\n \\n # On lit le premier entier du code lu\\n code_actuel = code_lu[0]\\n # Il est forc\u00e9ment d\u00e9j\u00e0 dans le dictionnaire, on le d\u00e9code directement\\n mot_actuel = dictionnaire[code_actuel]\\n # Et on l'\u00e9crit\\n mot_lu += mot_actuel\\n # On commence avec w = mot_actuel d'une lettre\\n w = mot_actuel\\n\\n for i in range(1, m):\\n # On lit le i\u00e8me code suivant\\n code_actuel = code_lu[i]\\n # S'il est d\u00e9j\u00e0 dans le dictionnaire, le mot actuel correspond au code donn\u00e9 par le dict.\\n if code_actuel in dictionnaire:\\n mot_actuel = dictionnaire[code_actuel]\\n print(\\\"Code lu = {}, mot correspondant = {}.\\\".format(\\n code_actuel, mot_actuel))\\n # Sinon c'est n\u00e9cessaire la prochaine valeur non encore ajout\u00e9e dans le dictionnaire\\n elif code_actuel == prochain_code:\\n # le mot actuel est le mot + la lettre du d\u00e9but de ce mot\\n mot_actuel = w + w[0]\\n print(\\\"Code lu = {}, mot correspondant = {}.\\\".format(\\n code_actuel, mot_actuel))\\n else:\\n raise ValueError(\\\"Mauvaise compression code_actuel = {}\\\".format(code_actuel))\\n # On \u00e9crit ce mot actuel\\n mot_lu += mot_actuel\\n print(\\\"Ajout de {} au dictionnaire, de valeur = '{}'\\\".format(\\n prochain_code, w + mot_actuel[0]))\\n # On ajoute dans le dictionnaire w + mot_actuel[0]\\n dictionnaire[prochain_code] = w + mot_actuel[0]\\n prochain_code += 1\\n # Et le w suivant est mot_actuel\\n w = mot_actuel\\n return mot_lu\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"05\\/.ipynb_checkpoints\\/06-classwork-checkpoint.ipynb\\\".\\nThe first task is:\\n3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating. Is it the same artist who has the largest number of followers?\\nCan you write Python code for it?\\n\",\"targets\":\"\\nmost_popular_name = \\\"\\\"\\nmost_popular_score = 0\\nfor artist in artists:\\n print(\\\"Looking at\\\", artist['name'], \\\"who has a popularity score of\\\", artist['popularity'])\\n # THE CONDITIONAL\\n # a.k.a. what you are testing\\n print(\\\"Comparing\\\", artist['popularity'], \\\"to\\\", most_popular_score)\\n # Seeing if the artist is more popular that what we've saved\\n if artist['popularity'] > most_popular_score:\\n print(\\\"FOUND A NEW MOST POPULAR, checking to see if it's Lil Wayne trying to sneak through\\\")\\n # Seeing if it's actually Lil Wayne\\n if artist['name'] == \\\"Lil Wayne\\\":\\n print(\\\"Nice try Lil Wayne, we don't care\\\")\\n else:\\n # THE CHANGE\\n # a.k.a. 
what you're keeping track of\\n print(\\\"Not Lil Wayne, updating our notebook\\\")\\n most_popular_name = artist['name']\\n most_popular_score = artist['popularity']\\nprint(\\\"#########\\\")\\nprint(most_popular_name, most_popular_score)\\n\\n\\nname = \\\"Lil Soma\\\"\\n\\ntarget_score = 72\\n# 1: INITIAL CONDITION\\n# We have no one who is in our list yet\\nsecond_best_artists = []\\n\\n# AGGREGATION PROBLEM\\n# When you're looping through a series of serious objects\\n# and sometimes you want to add one of those objects\\n# to a DIFFERENT list\\nfor artist in artists:\\n # Let's print out what's on our notebook\\n print(\\\"Notebook is\\\", second_best_artists)\\n print(\\\"Looking at\\\", artist['name'], \\\"who has a popularity of\\\", artist['popularity'])\\n # 2: CONDITIONAL\\n # when we want to add someone to our list\\n if artist['popularity'] == 72:\\n print(\\\"!!!!!!!! The artist's popularity is 72\\\")\\n # 3: THE CHANGE\\n # Add that artist to our list\\n # .append(newthing) is how we do that in Python\\n second_best_artists.append(artist['name'])\\n\\nprint(\\\"OUR SECOND BEST ARTISTS ARE:\\\")\\nfor artist in second_best_artists:\\n print(artist)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Application of this rule will merge nodes bad_circle and good_circle in the graph g2. It with then add a new node and connect it with an edge to the merged node. Let us specify some typings of the new node in the RHS: we will set the new node to be typed as circle in the graph shapes.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nrhs_typing = {\\n \\\"shapes\\\": {\\n \\\"c\\\": \\\"circle\\\"\\n }\\n}\\n\\nrhs_instance = hierarchy.rewrite(\\\"g2\\\", rule, instance, rhs_typing=rhs_typing)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Project_2.ipynb\\\".\\nThe first task is:\\nHere I want to add the columns that the DataFrame needs so i do index and list comprehension for each particular one.\\nCan you write Python code for it?\\n\",\"targets\":\"\\ngames_df['Ratings'] = [rate for rate in ratings]\\n\\ngames_df['Total_Points'] = [n for n in total_points]\\n\\ngames_df['Player_ID'] = [n for n in player_number]\\n\\ngames_df['State'] = [state for state in player_state]\\n\\ngames_df['Player_Names'] = [player for player in player_names]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"311 Combining CSV Datasets, Parsing by Week.ipynb\\\".\\nThe first task is:\\nConverting Date to date format and creating a new column for week numbers based on that\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndf['Created Date'] = pd.to_datetime(df['Created Date'], errors='coerce')\\ndf.sort_values('Created Date', inplace = True)\\ndf['Week Number'] = df['Created Date'].dt.week\\ndf\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# \u5b66\u7fd2\u7d50\u679c\u306e\u30c7\u30fc\u30bf\u30d5\u30ec\u30fc\u30e0\u3092\u4f5c\u6210\\ndf_normal = pd.DataFrame(data_normal)\\nGiven the 
above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\n\u7d50\u679c\u306e\u53ef\u8996\u5316\\n\u6b21\u306b\u7d50\u679c\u306e\u53ef\u8996\u5316\u3067\u3059\u3002\u5b66\u7fd2\u7d50\u679c data_normal \u3092 pandas \u306e\u30c7\u30fc\u30bf\u30d5\u30ec\u30fc\u30e0\u306b\u5909\u63db\u3057\u3001\u30c7\u30fc\u30bf\u30d5\u30ec\u30fc\u30e0\u306e\u6a5f\u80fd\u3092\u4f7f\u3063\u3066\u30b0\u30e9\u30d5\u3092\u63cf\u753b\u3057\u307e\u3059\u3002\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Labs\\/Lab10\\/Lab10.ipynb\\\".\\nThe first task is:\\n(2): Open the resulting image, std114637_Vstack_error.fit, in ds9. What do you notice about the image? Describe quantitatively.\\nAnswer:\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# Extract the star positions. Replace ?? with values for the extraction parameters that capture the stars of interest.\\nstd_V_xpos, std_V_ypos = starExtractor(\\\"std114637_V_stack.fit\\\", nsigma_value=??, fwhm_value=??)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Note-3 Tensor Ridge Regression\\/global dimensionality reduction (GDR) algorithm Ver 1.0.ipynb\\\".\\nThe first task is:\\n\u5faa\u73aftucker\u5206\u89e3\\n\u8ba1\u5212\u8fc1\u79fbPyTorch\u6784\u5efa\u6279\u6b21\u5904\u7406\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfactors_tensor = tensorly.tensor(factors_tensor)\\n\\ncore_list = []\\nmode_factors_list = []\\nfor i in range(factors_tensor.shape[0]):\\n print (i)\\n core, mode_factors= tucker(factors_tensor[i])\\n core = np.expand_dims(core, axis=0)\\n core_list.append(core)\\n mode_factors_list.append(mode_factors)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Build Nearest Neighbor Embedding Index\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nEMBEDDING_LENGTH = 512\\nNN_INDEX_FILENAME = \\\"reviews.ann\\\"\\n\\nos.environ[\\\"TFHUB_CACHE_DIR\\\"] = \\\"tf_cache_dir\\\"\\n\\n# define graph\\ng = tf.Graph()\\nwith g.as_default():\\n text_input = tf.placeholder(dtype=tf.string, shape=[None])\\n encoder = hub.Module(\\\"https:\\/\\/tfhub.dev\\/google\\/universal-sentence-encoder-large\\/3\\\")\\n embed = encoder(text_input)\\n init_op = tf.group([tf.global_variables_initializer(), tf.tables_initializer()])\\ng.finalize()\\n\\n# initialize session\\nsess = tf.Session(graph=g)\\nsess.run(init_op)\\n\\n# build and save embedding index\\nt = annoy.AnnoyIndex(EMBEDDING_LENGTH, 'angular') # Length of item vector that will be indexed\\nfor i, review in enumerate(reviews):\\n # produce embedding with TF\\n embedding = sess.run(embed, feed_dict={text_input: [review]})\\n t.add_item(i, embedding[0])\\nt.build(10) # 10 trees\\nt.save(NN_INDEX_FILENAME)\\n\\nrun.log_artifact(\\\"nn_index\\\", open(NN_INDEX_FILENAME, 'rb'))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Dato-tutorials\\/pattern-mining\\/Pattern Mining - Association Rule Mining.ipynb\\\".\\nThe first task is:\\nOnce 
the features are extracted, we can use them downstream in other applications such as clustering, classification, churn prediction, recommender systems etc.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfeatures.print_rows(num_rows=10, max_row_width=90, max_column_width=100)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"3 Quadratic Discriminant Analysis\\nUnlike the LDA, the QDA assumes that each class has its own convariance matrix. That is, it assume that an obserivation from the $k$th class is of the form $X \\\\sim N(\\\\mu_k, \\\\Sigma_k)$, where $\\\\Sigma_k$ is a convariance matrix for $k$th class. Under this assumption, the Bayes classifier assigns an observation $X=x$ to the class for which\\n$$\\n\\\\delta_k=-\\\\frac{1}{2}(x-\\\\mu_k)^T\\\\Sigma_k^{-1}(x-\\\\mu_k) - \\\\frac{1}{2}log|\\\\Sigma_k|+log\\\\pi_k\\n$$\\nis largest. The quantity $x$ appears as a quadratic function. That is where QDA gets its name.\\n3.1 Comparsion of LDA and QDA\\nLDA has the bettter variance but suffers from high bias, QDA is vice-versa.\\n\",\"targets\":\"from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis\\nqda = QuadraticDiscriminantAnalysis()\\nqda.fit(X,y)\\npred = qda.predict(X)\\nprint(1-accuracy_score(pred, y))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"
    Functionalize this<\\/h4>\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndef get_snippets(response):\\n all_snippets = list()\\n for business in response['businesses']:\\n name = business['name']\\n snippet = business['snippet_text']\\n id = business['id']\\n all_snippets.append((id,name,snippet))\\n return all_snippets\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"-------------- Naive Baseline ---------------\\nOur Naive Baseline for any user i, item j prediction is to assign it with average rating over entire dataset. (amean))
    \\n\",\"targets\":\"def predict_naive(user, item):\\n return amean1\\n\\nx1, x2 = train_test_split(nmat, test_size = 0.2, random_state =42)\\nnaive = np.zeros((len(rows),len(cols)))\\nfor row in x1:\\n naive[row[0], row[1]] = row[2]\\n\\npredictions = []\\ntargets = []\\n\\namean1 = np.mean(naive[naive!=0])\\numean1 = sum(naive.T) \\/ sum((naive!=0).T)\\nimean1 = sum(naive) \\/ sum((naive!=0))\\n\\numean1 = np.where(np.isnan(umean1), amean1, umean1)\\nimean1 = np.where(np.isnan(imean1), amean1, imean1)\\n\\n\\nprint('Naive---')\\nfor row in x2:\\n user, item, actual = row[0], row[1], row[2]\\n predictions.append(predict_naive(user, item))\\n targets.append(actual)\\n\\nprint('rmse %.4f' % rmse(np.array(predictions), np.array(targets)))\\nprint('mae %.4f' % mae(np.array(predictions), np.array(targets)))\\nprint()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"
    \\n\\nIndexing using where():<\\/p>\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nx_1 = np.array([1,2,3,4,5])\\n\\ny_1 = np.array([11,22,33,44,55])\\n\\nfilter = np.array([True, False, True, False, True])\\n\\nout = np.where(filter, x_1, y_1)\\nprint(out)\\n\\nmat = np.random.rand(5,5)\\nmat\\n\\nnp.where( mat > 0.5, 1000, -1)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"00_getting_started.ipynb\\\".\\nThe first task is:\\nDone! OK, it doesn't look that great. That's because we are using only 10 sampling points. Try changing the number of points generated by np.linspace from 10 to another, higher value. Re-run all the cells and see what changes.\\nWe can also apply other changes. First of all, we can define a Python function to calculate $f(x)$. We can then reuse this function wherever we want instead of typing it down everytime!\\nA Python function:\\n1. Starts with the def statement\\n2. Followed by the name of the function\\n3. In parentheses, a comma-separated list of function arguments. In this case the numpy array with the $x$ points.\\n4. A colon\\n5. The body of the function\\nOur function first calculates $f(x)$ and stores the values in the y array, then returns the array.\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndef my_first_function(x):\\n import numpy as np\\n y = 10 * x**3 * np.exp(-x**2) + np.sin(x**5) \\/ np.cos(x**3)\\n return y\\n\\nfunction = my_first_function(x)\\nprint(function)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Code\\/SQL\\/SQL_Intro_DBcopy.ipynb\\\".\\nThe first task is:\\nWHERE\\nSELECT \\n column_a\\nFROM\\n table_name\\nWHERE\\n column_a = x # filters the result-set to rows where column_a's value is exactly x\\n\\nA few more options for the where clause:\\nWHERE column_a = 'some_text' # put text in quotations. CAPITALIZATION IS IMPORTANT\\n\\nWHERE column_a != x # filters the result-set to rows where column_a's value DOES NOT EQUAL x\\n\\nWHERE column_a < x # filters the result-set to rows where column_a's value is less than x\\n\\nWHERE columna_a <= x # filters the result-set to rows where column_a's value is less than or equal to x\\n\\nWHERE column_a IN (x, y) # column_a's value can be EITHER x OR y\\n\\nWHERE column_a NOT IN (x, y) # column_a's value can be NEITHER x NOR y\\n\\nWHERE column_a BETWEEN x AND y # BETWEEN lets you specify a range\\n\\nWHERE column_a = x AND column_b = y # AND lets you add more filters\\n\\nWHERE column_a = x OR column_b = y # OR will include results that fulfill either criteria\\n\\nWHERE (column_a = x AND column_b = y) OR (column_c = z) # use parentheses to create complex AND\\/OR statements\\n\\n\\nWHERE allows you to filter the result-set to only include rows matching specific values\\/criteria. If the value\\/criteria is text, remember to put it in single or double quotation marks\\nMore on WHERE: http:\\/\\/www.w3schools.com\\/sql\\/sql_where.asp\\n\\nBelow, WHERE filters out any rows that don't match the criteria. 
The result-set will only contain rows where the payment type is cash AND where the model_id is 46:\\nCan you write Python code for it?\\n\",\"targets\":\"\\nrun('''\\n SELECT\\n *\\n FROM\\n sales_table\\n WHERE\\n payment_type = 'cash'\\n AND model_id = 46 \\n ''')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%%time\\nquotes = loop.run_until_complete(quote_many(2000, conn_limit=100, step=20))\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nFor fast response, progress bar introduces considerable latency. Try modify the step higher.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"03_Data_and_Expressions_Lecture.ipynb\\\".\\nThe first task is:\\nVariable can be assigned different values during a program's execution. \\nVariables are assigned values by use of the assignment operator (=)\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# Update Variable\\nnum = num + 1 # num += 1 \\n\\nnum = (\\\"Nazugum\\\")\\n\\ntype(num)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"After restarting the cluster, it is important to be EXTREMELY careful to load all required variables and modules on all engines again.\\nrestart\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n%who\\n\\n! rm -f OUT_exp_growth_model\\/PAR_[0123456789]*\\n\\nfrom ipyparallel import Client\\n\\ncl = Client()\\n\\ncl.ids\\n\\n%%px --local\\n\\n# run whole cell on all engines a well as in the local IPython session\\n\\nimport numpy as np\\n\\nimport sys\\n\\nsys.path.insert(0, '\\/home\\/claudius\\/Downloads\\/dadi')\\n\\nimport dadi\\n\\n%%px --local\\n\\n# import 1D spectrum of ery on all engines:\\nfs_ery = dadi.Spectrum.from_file('ERY.unfolded.sfs').fold()\\n\\n# import 1D spectrum of ery on all engines:\\nfs_par = dadi.Spectrum.from_file('PAR.unfolded.sfs').fold()\\n\\n%%px --local\\n\\nns = fs_ery.sample_sizes # both populations have the same sample size\\n\\nfs_ery.pop_ids = ['ery']\\nfs_par.pop_ids = ['par']\\n\\n# setting the smallest grid size slightly larger than the largest population sample size (36)\\npts_l = [40, 50, 60]\\n\\n%%px --local\\n\\n# create link to function that specifies the model\\nfunc = dadi.Demographics1D.growth\\n\\n# create extrapolating version of the model function\\nfunc_ex = dadi.Numerics.make_extrap_log_func(func)\\n\\n# create load balanced view of engines\\n\\nlbview = cl.load_balanced_view()\\n\\n%%px --local\\n\\nimport dill # better version of pickle\\n\\nfrom itertools import product\\n\\ndef run_dadi(p_init): # for the function to be called with map, it needs to have one input variable\\n \\\"\\\"\\\"\\n p_init: initial parameter values to run optimisation from\\n \\\"\\\"\\\"\\n if perturb == True:\\n p_init = dadi.Misc.perturb_params(p_init, fold=fold, \\n upper_bound=upper_bound, lower_bound=lower_bound)\\n # note upper_bound and lower_bound variables are expected to be in the namespace of each engine\\n # run optimisation of paramters\\n popt = dadi_opt_func(p0=p_init, data=sfs, model_func=func_ex, pts=pts_l, \\\\\\n lower_bound=lower_bound, 
upper_bound=upper_bound, \\\\\\n verbose=verbose, maxiter=maxiter, full_output=full_output)\\n import dill\\n # pickle to file\\n name = outname[:] # make copy of file name stub!\\n for p in p_init:\\n name += \\\"_%.4f\\\" % (p)\\n with open(name...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"As we can see, summarizing the exclusions is very simple because we have created this cohort table. With that, we conclude this tutorial on cohort selection. To recap, you have learned that:\\n\\nbest practice is to create a \\\"cohort\\\" table with a single row for every unique icustay_id, which is usually the identifier of interest\\nexclusions flags can be created based off rules, allowing easy prototyping, modification, and summarization later\\nwhen identifying the type of care provided, use the services table\\nread the docs, and don't make assumptions!\\n\\nAlso, remember that when prototyping the LIMIT clause is very useful for speed gains, but don't forget to remove it once you want to test your code on all 60,000+ admissions :)\\nGood luck in your analysis!\\n\",\"targets\":\"# close out the database connection\\ncon.close()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"C5.Classification_NN\\/NeuralNetworks_professor.ipynb\\\".\\nThe first task is:\\nIt is important to deactivate gradient updates after the network has been evaluated on training data, and gradients of the loss function have been computed\\nCan you write Python code for it?\\n\",\"targets\":\"\\nplt.figure(figsize=(14,5))\\nplt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')\\nplt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Creating a master list and generating a list of potential triplets\\nFor this notebook, we are going to create a synthetic master list covering a wavelength interval ranging from wv_ini_master to wv_end_master, containing a total of nlines_master lines. The resulting collection of master lines will be stored in a numpy array named wv_master. 
An auxiliary function simulate_master_table is defined to help in this step.\\n\",\"targets\":\"def simulate_master_table(my_seed, wv_ini_master, wv_end_master, nlines_master,\\n ldebug=False):\\n \\\"\\\"\\\"Generates a simulated master table of wavelengths.\\n\\n The location of the lines follows a random uniform distribution\\n between `wv_ini_master` and `wv_end_master`.\\n \\n Parameters\\n ----------\\n my_seed : int\\n Seed to re-initialize random number generation.\\n wv_ini_master : float\\n Minimum wavelength in master table.\\n wv_end_master : float\\n Maximum wavelength in master table.\\n nlines_master : int\\n Total number of lines in master table.\\n ldebug : bool\\n If True intermediate results are displayed.\\n\\n Returns\\n -------\\n wv_master : 1d numpy array, float\\n Array with wavelengths corresponding to the master table (Angstroms).\\n \\\"\\\"\\\"\\n if my_seed is not None:\\n np.random.seed(my_seed)\\n \\n if wv_end_master < wv_ini_master:\\n raise ValueError('wv_ini_master=' + str(wv_ini_master) +\\n ' must be <= wv_end_master=' + str(wv_end_master))\\n \\n wv_master = np.random.uniform(low=wv_ini_master,\\n high=wv_end_master,\\n size=nlines_master)\\n wv_master.sort() # in-place sort\\n\\n if ldebug:\\n print('>>> Master table:')\\n for val in zip(range(nlines_master), wv_master):\\n print(val)\\n pause(lpause)\\n\\n return wv_master\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"*S pomocjo substitucije izracunaj nedoloceni integral\\n$$\\\\int \\\\frac{\\\\cos(x)}{\\\\sin(x)}.$$\\n*\\n\",\"targets\":\"sympy.simplify(f(x).integrate())\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"+\uff0c\u8868\u793a\u4e00\u4e2a\u6216\u8005\u91cd\u590d\u591a\u4e2a\u5bf9\u8c61\uff0c\u5bf9\u8c61\u4e3a+\u524d\u9762\u6307\u5b9a\u7684\u6a21\u5f0f\\n\u56e0\u6b64\\\\d+\u53ef\u4ee5\u5339\u914d\u957f\u5ea6\u81f3\u5c11\u4e3a1\u7684\u4efb\u610f\u6b63\u6574\u6570\u5b57\u7b26\u3002\\n\\n2. \u57fa\u672c\u5339\u914d\u4e0e\u5b9e\u4f8b\\n\u5b57\u7b26\u6a21\u5f0f|\u5339\u914d\u6a21\u5f0f\u5185\u5bb9|\u7b49\u4ef7\u4e8e\\n----|---|--\\n[a-d]|One character of: a, b, c, d|[abcd]\\n[^a-d]|One character except: a, b, c, d|[^abcd]\\nabc\u4e28def|abc or def|\\n\\\\d|One digit|[0-9]\\n\\\\D|One non-digit|[^0-9]\\n\\\\s|One whitespace|[ \\\\t\\\\n\\\\r\\\\f\\\\v]\\n\\\\S|One non-whitespace|[^ \\\\t\\\\n\\\\r\\\\f\\\\v]\\n\\\\w|One word character|[a-zA-Z0-9_]\\n\\\\W|One non-word character|[^a-zA-Z0-9_]\\n.|Any character (except newline)|[^\\\\n]\\n\u56fa\u5b9a\u70b9\u6807\u8bb0|\u5339\u914d\u6a21\u5f0f\u5185\u5bb9\\n----|---\\n^|Start of the string\\n$|End of the string\\n\\\\b|Boundary between word and non-word characters\\n\u6570\u91cf\u8bcd|\u5339\u914d\u6a21\u5f0f\u5185\u5bb9\\n----|---\\n{5}|Match expression exactly 5 times\\n{2,5}|Match expression 2 to 5 times\\n{2,}|Match expression 2 or more times\\n{,5}|Match expression 0 to 5 times\\n*|Match expression 0 or more times\\n{,}|Match expression 0 or more times\\n?|Match expression 0 or 1 times\\n{0,1}|Match expression 0 or 1 times\\n+|Match expression 1 or more times\\n{1,}|Match expression 1 or more times\\n\u5b57\u7b26\u8f6c\u4e49|\u8f6c\u4e49\u5339\u914d\u5185\u5bb9\\n----|---\\n\\\\.|. character\\n\\\\\\\\|\\\\ character\\n\\\\| character\\n\\\\+|+ character\\n\\\\?|? 
character\\n\\\\{|{ character\\n\\\\)|) character\\n\\\\[|[ character\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nm = re.findall(r'\\\\d{3,4}-?\\\\d{8}', '010-66677788,02166697788, 0451-22882828')\\nm\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%%time\\n!gcloud ai-platform local train \\\\\\n --package-path=taxifaremodel \\\\\\n --module-name=taxifaremodel.task \\\\\\n -- \\\\\\n --train_data_path=taxi-train.csv \\\\\\n --eval_data_path=taxi-valid.csv \\\\\\n --train_steps=1 \\\\\\n --output_dir=taxi_trained \\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nTrain using AI Platform Training Service (Local)\\nAI Platform Training Service comes with a local test tool (gcloud ai-platform local train) to ensure we've packaged our code directly. It's best to first run that for a few steps before trying a Cloud job. \\nThe arguments before -- \\\\ are for AI Platform Training Service\\n- package-path: speficies the location of the Python package\\n- module-name: specifies which .py file should be run within the package. task.py is our entry point so we specify that\\nThe arguments after -- \\\\ are sent to our task.py.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"#### Adjust burn and thin, both paramters of the mcmc sample function\\n\\n#### Narrow and broaden prior\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nProblems:\\n4 - Try adjusting the number of samples for burning and thinnning\\n5 - Try adjusting the prior and see how it affects the estimate\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"1\ubc88 \uc778\ub371\uc2a4 \uac12, \uc989, \ub458\uc9f8 \ubb38\uc790\\n\",\"targets\":\"a_food[1]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"
    \\n\\n**Challenge: Plotting Spectra with Mean Function**\\n\\n Turn the script for plotting spectra and their mean above into a function. \\n\\n<\\/div>\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n### EXERCISE\\nSpecIndices = range(1000, 2000, 100)\\nSomeSpectraNorm = TinyVecsNorm[:, range(1000, 2000, 100)]\\nMuNorm = PlotSpectraAndMean(SomeSpectraNorm, Wv, 3)\\n\\nCNorm = np.cov(TinyVecsNorm)\\nplt.figure()\\nplt.imshow(CNorm)\\nplt.show()\\n\\n# PRINT OUT SOME \\\"AMPLIFIED\\\" COVARIANCE VALUES %%%\\nfor cn in range(0, 50,5):\\n w = int(Wv[cn])\\n if cn==0:\\n print(\\\" \\\", end=\\\" \\\")\\n else:\\n print('{0:5d}'.format(w), end=\\\" \\\")\\nprint('\\\\n')\\n\\n\\nfor rn in range(5, 50, 5):\\n w = int(Wv[rn])\\n print('{0:5d}'.format(w), end=\\\" \\\")\\n for cn in range(5,50,5):\\n CovVal = int(10000000*CNorm[rn, rn])\\n print('{0:5d}'.format(CovVal), end=\\\" \\\")\\n print('\\\\n')\\n\\nprint(np.shape(TinyVecs))\\nprint(NEONNir)\\nprint(NEONRed)\\nNIRVals = TinyVecs[NEONNir, range(NSamps)]\\nRedVals = TinyVecs[NEONRed, range(NSamps)]\\nNDVIVals = (NIRVals-RedVals)\\/(NIRVals+RedVals)\\nnp.shape(NDVIVals)\\nNDVIIm = np.reshape(NDVIVals,(NRows, NCols),order='F')\\n\\nprint(np.shape(NDVIIm))\\n\\nplt.figure()\\nplt.hist(NDVIVals)\\nplt.show()\\n\\nHiNDVI = NDVIIm*(NDVIIm>0.8)\\nplt.figure()\\nplt.imshow(HiNDVI)\\nplt.show()\\n\\n# plt.figure()\\n# plt.plot(nonzero(NDVIVals>0.8))\\n# plt.show()\\nVegIndices = np.nonzero(NDVIVals>0.8)\\n# print(VegIndices[0])\\nprint(np.shape(VegIndices))\\n# print(np.shape(TinyVecs))\\nVegSpectra = TinyVecs[:, VegIndices[0]]\\nprint(np.shape(VegSpectra))\\n\\nCVeg = np.cov(VegSpectra)\\nplt.figure(9)\\nplt.imshow?\\nplt.imshow(CVeg,extent=(np.amin(Wv), np.amax(Wv),np.amax(Wv), np.amin(Wv)))\\nplt.colorbar()\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Author: Alexandre Gramfort \\n#\\n# License: BSD-3-Clause\\n\\nimport matplotlib.pyplot as plt\\n\\nimport mne\\nfrom mne.datasets import sample\\nfrom mne.minimum_norm import apply_inverse_raw, read_inverse_operator\\n\\nprint(__doc__)\\n\\ndata_path = sample.data_path()\\nfname_inv = (\\n data_path \\/ 'MEG' \\/ 'sample' \\/ 'sample_audvis-meg-oct-6-meg-inv.fif')\\nfname_raw = data_path \\/ 'MEG' \\/ 'sample' \\/ 'sample_audvis_raw.fif'\\nlabel_name = 'Aud-lh'\\nfname_label = data_path \\/ 'MEG' \\/ 'sample' \\/ 'labels' \\/ f'{label_name}.label'\\n\\nsnr = 1.0 # use smaller SNR for raw data\\nlambda2 = 1.0 \\/ snr ** 2\\nmethod = \\\"sLORETA\\\" # use sLORETA method (could also be MNE or dSPM)\\n\\n# Load data\\nraw = mne.io.read_raw_fif(fname_raw)\\ninverse_operator = read_inverse_operator(fname_inv)\\nlabel = mne.read_label(fname_label)\\n\\nraw.set_eeg_reference('average', projection=True) # set average reference.\\nstart, stop = raw.time_as_index([0, 15]) # read the first 15s of data\\n\\n# Compute inverse solution\\nstc = apply_inverse_raw(raw, inverse_operator, lambda2, method, label,\\n start, stop, pick_ori=None)\\n\\n# Save result in stc files\\nstc.save('mne_%s_raw_inverse_%s' % (method, label_name), overwrite=True)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nCompute sLORETA inverse solution on raw data\\nCompute sLORETA inverse solution on raw dataset restricted\\nto a brain label and stores the solution in stc files 
for\\nvisualisation.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"!type Simple.g4 \\n\\n!cat -n Simple.g4\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nThe file Simple.g4 contains a parser for the language described by the grammar Pure.g4. This parser returns\\nan abstract syntax tree. This tree is represented as a nested tuple.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"And this is header we are going to use:\\n\",\"targets\":\"header = \\\"\\\"\\\"\\\\\\n# BPH2CSV: %s\\n# Lines per burst: 2\\n# - timestamps (int64): in 12.5 ns units\\n# - stream (uint8): the photon stream according to the mapping {0: DexDem, 1: DexAem, 2: AexDem, 3: AemAem}\\n\\\"\\\"\\\" % filename\\nprint(header)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Dans cas pr\u00e9cis, on ne souhaite pas trier sur les nombres mais sur leur parit\u00e9. En quelque sorte, on ne s'int\u00e9resse pas de savoir dans quel ordre deux nombres pairs seront tri\u00e9s. Cela r\u00e9duit le nombre d'op\u00e9rations \u00e0 effectuer. Une id\u00e9e consiste \u00e0 parcourir le tableau par les deux bouts et \u00e0 \u00e9changer deux nombres d\u00e8s que leur parit\u00e9 sont mal class\u00e9es.\\n\",\"targets\":\"def trie_parite(l):\\n i = 0\\n j = len(l)-1\\n while i < j :\\n while i < j and l[i]%2 == 0 : i += 1\\n while i < j and l[j]%2 == 1 : j -= 1\\n if i < j:\\n ech = l[i]\\n l[i] = l[j]\\n l[j] = ech\\n i += 1\\n j -= 1\\n \\nl = l.copy()\\ntrie_parite(l)\\nl\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\".ipynb_checkpoints\\/Point cloud to HDF-checkpoint.ipynb\\\".\\nThe first task is:\\ntake a quick look at the data\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfig = plt.figure(figsize = ([30\\/2.54, 6\\/2.54]))\\nax0 = fig.add_subplot(111) \\na0 = ax0.scatter(swath['Y'], swath['X'], c=swath['Z'] - np.min(swath['Z']), cmap = 'gist_earth',\\n vmin=0, vmax=10, edgecolors=None,lw=0, s=0.6)\\na1 = ax0.scatter(air_traj['Y'], air_traj['X'], c=air_traj['Z'], cmap = 'Reds',\\n lw=0, s=1)\\nplt.tight_layout()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"2.3. Triggering the workflow\\nPlease wait for 30-60 seconds before triggering the workflow at the first Airflow Dag import\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n!gcloud composer environments run {COMPOSER_NAME} \\\\\\n --location {REGION} unpause -- test_sklearn_mlflow\\n\\n!gcloud composer environments run {COMPOSER_NAME} \\\\\\n --location {REGION} trigger_dag -- test_sklearn_mlflow\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Customized Search Space\\nYou can customize your search space.\\nThe following figure shows the search space we want to define.\\n
    \\ngraph LR\\n id1(ImageInput) --> id2(Normalization)\\n id2 --> id3(Image Augmentation)\\n id3 --> id4(Convolutional)\\n id3 --> id5(ResNet V2)\\n id4 --> id6(Merge)\\n id5 --> id6\\n id7(StructuredDataInput) --> id8(CategoricalToNumerical)\\n id8 --> id9(DenseBlock)\\n id6 --> id10(Merge)\\n id9 --> id10\\n id10 --> id11(Classification Head)\\n id10 --> id12(Regression Head)\\n<\\/div>\\n\",\"targets\":\"input_node1 = ak.ImageInput()\\noutput_node = ak.Normalization()(input_node1)\\noutput_node = ak.ImageAugmentation()(output_node)\\noutput_node1 = ak.ConvBlock()(output_node)\\noutput_node2 = ak.ResNetBlock(version=\\\"v2\\\")(output_node)\\noutput_node1 = ak.Merge()([output_node1, output_node2])\\n\\ninput_node2 = ak.StructuredDataInput()\\noutput_node = ak.CategoricalToNumerical()(input_node2)\\noutput_node2 = ak.DenseBlock()(output_node)\\n\\noutput_node = ak.Merge()([output_node1, output_node2])\\noutput_node1 = ak.ClassificationHead()(output_node)\\noutput_node2 = ak.RegressionHead()(output_node)\\n\\nauto_model = ak.AutoModel(\\n inputs=[input_node1, input_node2],\\n outputs=[output_node1, output_node2],\\n overwrite=True,\\n max_trials=2,\\n)\\n\\nimage_data = np.random.rand(num_instances, 32, 32, 3).astype(np.float32)\\nstructured_data = np.random.rand(num_instances, 20).astype(np.float32)\\nregression_target = np.random.rand(num_instances, 1).astype(np.float32)\\nclassification_target = np.random.randint(5, size=num_instances)\\n\\nauto_model.fit(\\n [image_data, structured_data],\\n [classification_target, regression_target],\\n batch_size=32,\\n epochs=3,\\n)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Christiano-Fitzgerald approximate band-pass filter: Inflation and Unemployment\\nThe Christiano-Fitzgerald filter is a generalization of BK and can thus also be seen as weighted moving average. However, the CF filter is asymmetric about $t$ as well as using the entire series. The implementation of their filter involves the\\ncalculations of the weights in\\n$$y_{t}^{*}=B_{0}y_{t}+B_{1}y_{t+1}+\\\\dots+B_{T-1-t}y_{T-1}+\\\\tilde B_{T-t}y_{T}+B_{1}y_{t-1}+\\\\dots+B_{t-2}y_{2}+\\\\tilde B_{t-1}y_{1}$$\\nfor $t=3,4,...,T-2$, where\\n$$B_{j} = \\\\frac{\\\\sin(jb)-\\\\sin(ja)}{\\\\pi j},j\\\\geq1$$\\n$$B_{0} = \\\\frac{b-a}{\\\\pi},a=\\\\frac{2\\\\pi}{P_{u}},b=\\\\frac{2\\\\pi}{P_{L}}$$\\n$\\\\tilde B_{T-t}$ and $\\\\tilde B_{t-1}$ are linear functions of the $B_{j}$'s, and the values for $t=1,2,T-1,$ and $T$ are also calculated in much the same way. 
$P_{U}$ and $P_{L}$ are as described above with the same interpretation.\\nThe CF filter is appropriate for series that may follow a random walk.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nprint(sm.tsa.stattools.adfuller(dta[\\\"unemp\\\"])[:3])\\n\\nprint(sm.tsa.stattools.adfuller(dta[\\\"infl\\\"])[:3])\\n\\ncf_cycles, cf_trend = sm.tsa.filters.cffilter(dta[[\\\"infl\\\", \\\"unemp\\\"]])\\nprint(cf_cycles.head(10))\\n\\nfig = plt.figure(figsize=(14, 10))\\nax = fig.add_subplot(111)\\ncf_cycles.plot(ax=ax, style=[\\\"r--\\\", \\\"b-\\\"])\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Data Description\\nFor a quick glance of the features and their descriptions the reader is referred here.\\nhttps:\\/\\/github.com\\/rfordatascience\\/tidytuesday\\/blob\\/master\\/data\\/2020\\/2020-02-11\\/readme.md\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndataset = pd.read_csv('https:\\/\\/raw.githubusercontent.com\\/Sid-darthvader\\/DoWhy-The-Causal-Story-Behind-Hotel-Booking-Cancellations\\/master\\/hotel_bookings.csv')\\ndataset.head()\\n\\ndataset.columns\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"#import matplotlib.pyplot as plt\\n\\n# ...just a quick plot\\ndata.plot();\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nOk, now lets do the plot...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"4 - pandas Basics\\/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb\\\".\\nThe first task is:\\nWe can import the date correctly by specifying the zero based look up or by specifying the name like we did in the last video.\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndf = pd.read_csv('..\\/data\\/date_fixed_running_data.csv', parse_dates=[0])\\n\\ndf.head()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Participants\\/JCarlos\\/02_Parsing.ipynb\\\".\\nThe first task is:\\nPlot the interaction matrix and print a summary of the interactions:\\nCis interactions\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfrom pytadbit.mapping.analyze import insert_sizes\\n\\ninsert_sizes('results\\/HindIII\\/03_filtering\\/reads12.tsv',show=True,nreads=100000)\\n\\nfrom pytadbit.mapping.filter import filter_reads\\n\\nfilter_reads('results\\/HindIII\\/03_filtering\\/reads12.tsv',max_molecule_length=750,min_dist_to_re=500)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Heatmap to show the p-values of the cointegration test between each pair of\\n# stocks. 
Only show the value in the upper-diagonal of the heatmap\\n# (Just showing a '1' for everything in lower diagonal)\\n\\nscores, pvalues, pairs = find_cointegrated_pairs(securities_panel)\\nimport seaborn\\nseaborn.heatmap(pvalues, xticklabels=symbol_list, yticklabels=symbol_list, cmap='RdYlGn_r' \\n , mask = (pvalues >= 0.95)\\n )\\nprint pairs\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nNow we'll run our method on the list and see if any pairs are cointegrated.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"As it can be observed the uncertainty of the measurements in the 18th and 19th century was very high. Early data was collected by technicians using mercury thermometers, where any variation in the visit time impacted measurements. In the 1940s, the construction of airports caused many weather stations to be moved. In the 1980s, there was a move to electronic thermometers that are said to have a cooling bias. One can chose to ignore or give smaller weights to older, less reliable measurements. For the data exploitation part we will consider data from 1900 onward. \\n3. Data Exploration\\n3.1 Which countries are warmer?\\nWe now draw a map with the average temperature of each country over all years. This serves as a quick way to check that our data make sense. We can see that the warmest countries are the ones along the Equator and that the coldest countries are Greenland, Canada and Russia. Countries for which the data was missing are depicted as white. One can hover above counties to see their name and average temperatures.\\n\",\"targets\":\"countries = np.unique(ByCountry_clear['Country'])\\nmean_temp = []\\nfor country in countries:\\n mean_temp.append(ByCountry_clear[ByCountry_clear['Country'] == country]['AverageTemperature'].mean())\\n #when taking the mean the missing data are automatically ignored=>see data cleaning section\\n\\n#use choropleth map provided by pyplot\\n \\ndata = [ dict(\\n type = 'choropleth', \\n locations = countries,\\n z = mean_temp,\\n locationmode = 'country names',\\n text = countries,\\n colorbar = dict(autotick = True, tickprefix = '', \\n title = '\\\\n \u00b0C')\\n )\\n ]\\n\\nlayout = dict(\\n title = 'Average Temperature in Countries',\\n geo = dict(\\n showframe = False,\\n showocean = True,\\n oceancolor = 'rgb(0,255,255)',\\n ),\\n )\\n\\nfig = dict(data=data, layout=layout)\\npy.iplot(fig,validate=False)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Periodic_Trends.ipynb\\\".\\nThe first task is:\\nPART 3: Unstructured Coding\\n\\nUse and modify the section of code below to answer questions 8-11 in your coding booklet.\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndata.Radioactive.count()\\n\\n# Set variables for scatter plot\\nx = data.Group\\ny = data.NumberofValence\\n\\nplt.scatter(x,y)\\nplt.title('Looking For Patterns')\\nplt.xlabel('x-axis')\\nplt.ylabel('y-axis')\\n\\n#this sets the interval on the x-axis\\nplt.xticks(np.arange(min(x), max(x)+1, 1.0))\\n\\n# This actually shows the plot\\nplt.show()\\n\\ndata[['AtomicNumber', 'Element', 
'Type']].sort_values(by='AtomicNumber')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"And we'll attach some dummy datasets. See Datasets for more details.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nb.add_dataset('orb', times=np.linspace(0,4,1000), dataset='orb01', component=['primary', 'secondary'])\\n\\ntimes, fluxes, sigmas = np.loadtxt('test.lc.in', unpack=True)\\n\\nb.add_dataset('lc', times=times, fluxes=fluxes, sigmas=sigmas, dataset='lc01')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Import required modules\\nfrom mtpy.imaging.plotstrike import PlotStrike\\nimport os\\n\\n# Path containing edi files\\nedi_path = r'C:\\/mtpywin\\/mtpy\\/examples\\/data\\/edi_files_2'\\n\\n# Full path to save path if you want to save the image\\nsavepath = r'C:\\/tmp'\\n\\n# Get full path to all files with the extension '.edi' in edi_path\\nedi_list = [os.path.join(edi_path,ff) for ff in os.listdir(edi_path) \\\\\\n if ff.endswith('.edi')]\\n\\n# make a plot (try also plot_type = 1 to plot by decade)\\nstrikeplot = PlotStrike(fn_list=edi_list, \\n plot_type=2,\\n plot_tipper='y')\\n\\n# save to file\\nstrikeplot.save_plot(savepath,\\n file_format='.png',\\n fig_dpi=400)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nPlot strike for a collection of files\\nIn this example, we plot the strike as a rose plot for a collection of edi files. Strike is calculated from the Z invariants (after Weaver et al., 2000; 2003), the phase tensor (Caldwell et al. 2004) and the tipper. Plots can include all frequencies or be separated according to decade if you set plot_type = 1.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"4. Key Properties --> Nonoceanic Waters\\nNon oceanic waters treatement in ocean\\n4.1. Isolated Seas\\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\\nDescribe if\\/how isolated seas is performed\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# PROPERTY ID - DO NOT EDIT ! 
\\nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \\n\\n# PROPERTY VALUE: \\n# Set as follows: DOC.set_value(\\\"value\\\") \\n# TODO - please enter value(s)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Cal3-PythonGraphes.ipynb\\\".\\nThe first task is:\\n3.1 Surface plots\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfig = plt.figure(figsize=(14,6))\\n\\n# `ax` is a 3D-aware axis instance because of the projection='3d' keyword argument to add_subplot\\nax = fig.add_subplot(1, 2, 1, projection='3d')\\n\\np = ax.plot_surface(X, Y, Z, rstride=4, cstride=4, linewidth=0)\\n\\n# surface_plot with color grading and color bar\\nax = fig.add_subplot(1, 2, 2, projection='3d')\\np = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=matplotlib.cm.coolwarm, linewidth=0, antialiased=False)\\ncb = fig.colorbar(p, shrink=0.5)\\nshow()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"04a_CorrectiveSolutions.ipynb\\\".\\nThe first task is:\\nStandard Solution\\nCan you write Python code for it?\\n\",\"targets\":\"\\nTx, Ty, Nz, Mx, My, Mz, F, ry, ry, mz = sympy.symbols('T_x T_y N_z M_x M_y M_z F r_y r_x m_z')\\n\\nS1.set_loads(_Tx=0, _Ty=Ty, _Nz=0, _Mx=Mx, _My=0, _Mz=Mz)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# GRADED FUNCTION: model\\n\\ndef model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):\\n \\\"\\\"\\\"\\n Builds the logistic regression model by calling the function you've implemented previously\\n \\n Arguments:\\n X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)\\n Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)\\n X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)\\n Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)\\n num_iterations -- hyperparameter representing the number of iterations to optimize the parameters\\n learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()\\n print_cost -- Set to true to print the cost every 100 iterations\\n \\n Returns:\\n d -- dictionary containing information about the model.\\n \\\"\\\"\\\"\\n \\n ### START CODE HERE ###\\n # initialize parameters with zeros (\u2248 1 line of code)\\n w, b = initialize_with_zeros(X_train.shape[0])\\n\\n # Gradient descent (\u2248 1 line of code)\\n parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)\\n \\n # Retrieve parameters w and b from dictionary \\\"parameters\\\"\\n w = parameters[\\\"w\\\"]\\n b = parameters[\\\"b\\\"]\\n \\n # Predict test\\/train set examples (\u2248 2 lines of code)\\n Y_prediction_test = predict(w, b, X_test)\\n Y_prediction_train = predict(w, b, X_train)\\n\\n ### END CODE HERE ###\\n\\n # Print train\\/test Errors\\n print(\\\"train accuracy: {} %\\\".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))\\n print(\\\"test accuracy: {} %\\\".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 
100))\\n\\n \\n d = {\\\"costs\\\": costs,\\n \\\"Y_prediction_test\\\": Y_prediction_test, \\n \\\"Y_prediction_train\\\" : Y_prediction_train, \\n \\\"w\\\" : w, \\n \\\"b\\\" : b,\\n \\\"learning_rate\\\" : learning_rate,\\n \\\"num_iterations\\\":...\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nExpected Output: \\n\\n \\n
    \\n **predictions**\\n <\\/td>\\n \\n [[ 1. 1.]]\\n <\\/td> \\n <\\/tr>\\n\\n<\\/table>\\n\\n\\nWhat to remember:\\nYou've implemented several functions that:\\n- Initialize (w,b)\\n- Optimize the loss iteratively to learn parameters (w,b):\\n - computing the cost and its gradient \\n - updating the parameters using gradient descent\\n- Use the learned (w,b) to predict the labels for a given set of examples\\n5 - Merge all functions into a model\\nYou will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.\\nExercise: Implement the model function. Use the following notation:\\n - Y_prediction for your predictions on the test set\\n - Y_prediction_train for your predictions on the train set\\n - w, costs, grads for the outputs of optimize()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Ajustes linealizados\\nUsando linealizaci\u00f3n y la funci\u00f3n linregress ajuste los siguientes datos, recuerde recuperar los factores y datos originales al terminar el ajuste lineal, realize el gr\u00e1fico lado a lado con el ajuste lineal y el ajuste de los datos originales.\\n\",\"targets\":\"x = np.linspace(0, 10)\\ny = 3.0 * x ** 2\\ny += np.random.normal(0, 0.1, size=y.shape)\\n\\n# Empiece a construir el gr\u00e1fico y el ajuste en esta celda\\n\\n\\nx = np.linspace(0, 10)\\ny = 3.0 * np.sqrt(x)\\ny += y * np.random.normal(0, 0.05, size=y.shape)\\n\\n# Empiece a construir el gr\u00e1fico y el ajuste en esta celda\\n\\n\\nx = np.linspace(0, 10)\\ny = 3.0 * x ** 0.75\\ny += np.random.normal(0, 0.1, size=y.shape)\\n\\n# Empiece a construir el gr\u00e1fico y el ajuste en esta celda\\n\\n\\nx = np.linspace(0, 10)\\ny = np.sqrt(5 * x)\\ny += y * np.random.normal(0, 0.05, size=y.shape)\\n\\n# Empiece a construir el gr\u00e1fico y el ajuste en esta celda\\n\\n\\nx = np.linspace(0.1, 10)\\ny = np.log(5 * x)\\ny += y * np.random.normal(0, 0.05, size=y.shape)\\n\\n# Empiece a construir el gr\u00e1fico y el ajuste en esta celda\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"And now let's use PCA to identify the principle components:\\n\",\"targets\":\"from sklearn.decomposition import PCA\\npca = PCA(n_components=64).fit(X_dig)\\nX_reduced =pca.transform(X_dig)\\nprint(X_reduced.shape)\\n\\nplt.imshow(X_reduced[np.argsort(y_dig),], aspect=0.02, cmap=plt.cm.gray_r)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Advanced Skew-T Plotting\\nFiducial lines indicating dry adiabats, moist adiabats, and mixing ratio are\\nuseful when performing further analysis on the Skew-T diagram. Often the\\n0C isotherm is emphasized and areas of CAPE and CIN are shaded.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# Create a new figure. 
The dimensions here give a good aspect ratio\\nfig = plt.figure(figsize=(9, 9))\\nskew = SkewT(fig, rotation=30)\\n\\n# Plot the data using normal plotting functions, in this case using\\n# log scaling in Y, as dictated by the typical meteorological plot\\nskew.plot(p, T, 'r')\\nskew.plot(p, Td, 'g')\\nskew.plot_barbs(p, u, v)\\nskew.ax.set_ylim(1000, 100)\\nskew.ax.set_xlim(-40, 60)\\n\\n# Plot LCL temperature as black dot\\nskew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black')\\n\\n# Plot the parcel profile as a black line\\nskew.plot(p, parcel_prof, 'k', linewidth=2)\\n\\n# Shade areas of CAPE and CIN\\nskew.shade_cin(p, T, parcel_prof, Td)\\nskew.shade_cape(p, T, parcel_prof)\\n\\n# Plot a zero degree isotherm\\nskew.ax.axvline(0, color='c', linestyle='--', linewidth=2)\\n\\n# Add the relevant special lines\\nskew.plot_dry_adiabats()\\nskew.plot_moist_adiabats()\\nskew.plot_mixing_lines()\\n\\n# Show the plot\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"
    Transform with HMM as scoring criteria<\\/h3>\\n\",\"targets\":\"meme2 = Meme(alphabet=\\\"dna\\\", scoring_criteria=\\\"hmm\\\", k=1, threshold=1.0,mod=\\\"anr\\\", nmotifs=3, minw=7, maxw=9)\\nmatches = meme2.fit_transform(fasta_file=\\\"seq9.fa\\\", return_match=True)\\nfor m in matches: print m\\n\\n%%time\\n# Markov Model score\\nmm_score = meme2.score(motif_num=2, seq=\\\"ACGT\\\"*10)\\nprint mm_score\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Compute permutation statistic\\nHow does it work? We use clustering to \\\"bind\\\" together features which are\\nsimilar. Our features are the magnetic fields measured over our sensor\\narray at different times. This reduces the multiple comparison problem.\\nTo compute the actual test-statistic, we first sum all F-values in all\\nclusters. We end up with one statistic for each cluster.\\nThen we generate a distribution from the data by shuffling our conditions\\nbetween our samples and recomputing our clusters and the test statistics.\\nWe test for the significance of a given cluster by computing the probability\\nof observing a cluster of that size\\n:footcite:MarisOostenveld2007,Sassenhagen2019.\\n\",\"targets\":\"# We are running an F test, so we look at the upper tail\\n# see also: https:\\/\\/stats.stackexchange.com\\/a\\/73993\\ntail = 1\\n\\n# We want to set a critical test statistic (here: F), to determine when\\n# clusters are being formed. Using Scipy's percent point function of the F\\n# distribution, we can conveniently select a threshold that corresponds to\\n# some alpha level that we arbitrarily pick.\\nalpha_cluster_forming = 0.001\\n\\n# For an F test we need the degrees of freedom for the numerator\\n# (number of conditions - 1) and the denominator (number of observations\\n# - number of conditions):\\nn_conditions = len(event_id)\\nn_observations = len(X[0])\\ndfn = n_conditions - 1\\ndfd = n_observations - n_conditions\\n\\n# Note: we calculate 1 - alpha_cluster_forming to get the critical value\\n# on the right tail\\nf_thresh = scipy.stats.f.ppf(1 - alpha_cluster_forming, dfn=dfn, dfd=dfd)\\n\\n# run the cluster based permutation analysis\\ncluster_stats = spatio_temporal_cluster_test(X, n_permutations=1000,\\n threshold=f_thresh, tail=tail,\\n n_jobs=1, buffer_size=None,\\n adjacency=adjacency)\\nF_obs, clusters, p_values, _ = cluster_stats\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"As we can see, the above example was kind of unfortunate, as the approximation fails badly. We check the type I error to verify that. This works similar to sampling the alternative distribution: re-sample data (assuming infinite amounts), perform the test and average results. Below we compare type I errors or all methods for approximating the null distribution. 
This will take a while.\\n\",\"targets\":\"# type I error is false alarm, therefore sample data under H0\\nnum_trials=50\\nrejections_gamma=zeros(num_trials)\\nrejections_spectrum=zeros(num_trials)\\nrejections_bootstrap=zeros(num_trials)\\nnum_samples=50\\nalpha=0.05\\nfor i in range(num_trials):\\n X=norm.rvs(size=n, loc=mu, scale=sigma2)\\n Y=laplace.rvs(size=n, loc=mu, scale=b)\\n \\n # simulate H0 via merging samples before computing the \\n Z=hstack((X,Y))\\n X=Z[:len(X)]\\n Y=Z[len(X):]\\n feat_p=sg.RealFeatures(reshape(X, (1,len(X))))\\n feat_q=sg.RealFeatures(reshape(Y, (1,len(Y))))\\n \\n # gamma\\n mmd=sg.QuadraticTimeMMD(feat_p, feat_q)\\n mmd.set_kernel(kernel)\\n mmd.set_null_approximation_method(sg.NAM_MMD2_GAMMA)\\n mmd.set_statistic_type(sg.ST_BIASED_FULL) \\n rejections_gamma[i]=mmd.perform_test(alpha)\\n \\n # spectrum\\n mmd=sg.QuadraticTimeMMD(feat_p, feat_q)\\n mmd.set_kernel(kernel)\\n mmd.set_null_approximation_method(sg.NAM_MMD2_SPECTRUM)\\n mmd.spectrum_set_num_eigenvalues(num_eigen)\\n mmd.set_num_null_samples(num_samples)\\n mmd.set_statistic_type(sg.ST_BIASED_FULL)\\n rejections_spectrum[i]=mmd.perform_test(alpha)\\n \\n # bootstrap (precompute kernel)\\n mmd=sg.QuadraticTimeMMD(feat_p, feat_q)\\n p_and_q=mmd.get_p_and_q()\\n kernel.init(p_and_q, p_and_q)\\n precomputed_kernel=sg.CustomKernel(kernel)\\n mmd.set_kernel(precomputed_kernel)\\n mmd.set_null_approximation_method(sg.NAM_PERMUTATION)\\n mmd.set_num_null_samples(num_samples)\\n mmd.set_statistic_type(sg.ST_BIASED_FULL)\\n rejections_bootstrap[i]=mmd.perform_test(alpha)\\n\\nconvergence_gamma=cumsum(rejections_gamma)\\/(arange(num_trials)+1)\\nconvergence_spectrum=cumsum(rejections_spectrum)\\/(arange(num_trials)+1)\\nconvergence_bootstrap=cumsum(rejections_bootstrap)\\/(arange(num_trials)+1)\\n\\nprint \\\"Average rejection rate of H0 for Gamma is %.2f\\\" % mean(convergence_gamma)\\nprint \\\"Average rejection rate of H0 for Spectrum is %.2f\\\" % mean(convergence_spectrum)\\nprint \\\"Average rejection rate of H0 for Bootstrapping is %.2f\\\" % mean(rejections_bootstrap)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Before we do anything, we should load the data:\\n\",\"targets\":\"garcia = pd.read_csv(\\\"garcia.csv\\\")\\ngarcia.head()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Create PUDL output object\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\npudl_settings = pudl.workspace.setup.get_defaults()\\ndisplay(pudl_settings)\\n\\nferc1_engine = sa.create_engine(pudl_settings['ferc1_db'])\\npudl_engine = sa.create_engine(pudl_settings['pudl_db'])\\n\\nAPI_KEY_EIA = os.environ[\\\"API_KEY_EIA\\\"]\\n\\npudl_out = pudl.output.pudltabl.PudlTabl(pudl_engine)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"(2b) Multiplica\u00e7\u00e3o elemento-a-elemento e produto interno \\nA multiplica\u00e7\u00e3o elemento-a-elemento \u00e9 calculada como: $$ \\\\mathbf{x} \\\\odot \\\\mathbf{y} = \\\\begin{bmatrix} x_1 y_1 \\\\\\\\ x_2 y_2 \\\\\\\\ \\\\vdots \\\\\\\\ x_n y_n \\\\end{bmatrix} $$\\nE o do produto interno de dois vetores de mesmo tamanho $ n $: $$ 
\\\\mathbf{w} \\\\cdot \\\\mathbf{x} = \\\\sum_{i=1}^n w_i x_i $$\\nEm alguns livros voc\u00ea tamb\u00e9m v\u00ea $ \\\\mathbf{w} \\\\cdot \\\\mathbf{x} $ escrito como $ \\\\mathbf{w}^\\\\top \\\\mathbf{x} $\\nO tipo Numpy Array suporta essas duas opera\u00e7\u00f5es, ao utilizar o operador * para multiplicar dois vetores ou matrizes, ele executar\u00e1 a multiplica\u00e7\u00e3o elemento-a-elemento. Para realizar o produto interno voc\u00ea pode utilizar tanto a fun\u00e7\u00e3o np.dot() ou ndarray.dot(). Ex.: dados os vetores $x$ e $y$ pode realizar a opera\u00e7\u00e3o como np.dot(x,y) ou x.dot(y).\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# EXERCICIO\\n# A fun\u00e7\u00e3o np.arange(inicio,fim,passo) cria uma lista iniciando em inicio, terminando antes do fim seguindo passo\\nu = np.arange(0, 5, .5) # np.array([0,0.5,1.0,...,4.5])\\nv = np.arange(5, 10, .5)\\n\\nelementoAelemento = u*v#\\nprodInterno = u.dot(v)#\\nprint 'u: {0}'.format(u)\\nprint 'v: {0}'.format(v)\\nprint '\\\\nelementoAelemento\\\\n{0}'.format(elementoAelemento)\\nprint '\\\\nprodInterno\\\\n{0}'.format(prodInterno)\\n\\n# TESTE do exerc\u00edcio (2b)\\nassert np.all(elementoAelemento == [ 0., 2.75, 6., 9.75, 14., 18.75, 24., 29.75, 36., 42.75]), \\\"Valores incorretos para elementoAelemento\\\"\\nprint \\\"Primeiro teste OK\\\"\\nassert prodInterno==183.75, \\\"Valor incorreto para prodInterno\\\"\\nprint \\\"Segundo teste OK\\\"\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Note how the stemmer recognizes \\\"runner\\\" as a noun, not a verb form or participle. Also, the adverbs \\\"easily\\\" and \\\"fairly\\\" are stemmed to the unusual root \\\"easili\\\" and \\\"fairli\\\"<\\/font>\\n\\nSnowball Stemmer\\nThis is somewhat of a misnomer, as Snowball is the name of a stemming language developed by Martin Porter. The algorithm used here is more acurately called the \\\"English Stemmer\\\" or \\\"Porter2 Stemmer\\\". It offers a slight improvement over the original Porter stemmer, both in logic and speed. Since nltk uses the name SnowballStemmer, we'll use it here.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nfrom nltk.stem.snowball import SnowballStemmer\\n\\n# The Snowball Stemmer requires that you pass a language parameter\\ns_stemmer = SnowballStemmer(language='english')\\n\\nwords = ['run','runner','running','ran','runs','easily','fairly']\\n# words = ['generous','generation','generously','generate']\\n\\nfor word in words:\\n print(word+' --> '+s_stemmer.stem(word))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Build the Neural Network\\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\\n- model_inputs\\n- process_decoding_input\\n- encoding_layer\\n- decoding_layer_train\\n- decoding_layer_infer\\n- decoding_layer\\n- seq2seq_model\\nInput\\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\\n\\nInput text placeholder named \\\"input\\\" using the TF Placeholder name parameter with rank 2.\\nTargets placeholder with rank 2.\\nLearning rate placeholder with rank 0.\\nKeep probability placeholder named \\\"keep_prob\\\" using the TF Placeholder name parameter with rank 0.\\n\\nReturn the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability)\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndef model_inputs():\\n \\\"\\\"\\\"\\n Create TF Placeholders for input, targets, and learning rate.\\n :return: Tuple (input, targets, learning rate, keep probability)\\n \\\"\\\"\\\"\\n # TODO: Implement Function\\n input = tf.placeholder(tf.int32, [None, None], name=\\\"input\\\")\\n targets = tf.placeholder(tf.int32, [None, None])\\n learning_rate = tf.placeholder(tf.float32)\\n keep_prob = tf.placeholder(tf.float32, name=\\\"keep_prob\\\")\\n return input, targets, learning_rate, keep_prob\\n\\n\\\"\\\"\\\"\\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\\n\\\"\\\"\\\"\\ntests.test_model_inputs(model_inputs)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"<\\/a>\\n
    Analyse Results <\\/h2>\\n\\ncompare the training loss for each activation\\n\",\"targets\":\"plt.plot(training_results_tanch['training_loss'],label='tanh')\\nplt.plot(training_results['training_loss'],label='sim')\\nplt.plot(training_results_relu['training_loss'],label='relu')\\nplt.ylabel('loss')\\nplt.title('training loss iterations')\\nplt.legend()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"A 'decimal' in binary is challenging to think about, because each integer location is a $1 \\/ 2^{n}$, where $n$ is the location. That means to represent a binary-mantissa exacly, its denominator must be a power of $2$. For example:\\n\",\"targets\":\"ffrac_ltx(0.5)\\n\\nffrac_ltx(0.75)\\n\\nffrac_ltx(0.5232)\\n\\n#This is the maximum number of bits used in the 0.1 Mantissa as a string.\\nprint(ffrac(0.1, 100))\\n\\nprint('{:0.28f}'.format(0.1))\\n\\n0.1 + 0.2 == (1.0 + 2.0) \\/ 10.0\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Changing attributes of the colorbar\\nThe ax we return is the same as fig.axes[0]. To get to the colorbar, use fig.axes[1].\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nfig, ax = traj_contacts.residue_contacts.plot()\\n# Scale the labels of the colorbar.\\nfig.axes[1].yaxis.set_tick_params(labelsize=20)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# NBVAL_SKIP\\nmol_sets_to_compare = ((('oe_mols_from_3d', oe_mols_from_3d), \\n ('rdk_mols_from_3d', rdk_mols_from_3d)),\\n #(('oe_mols_from_smiles', oe_mols_from_smiles), \\n # ('rdk_mols_from_smiles', rdk_mols_from_smiles)),\\n (('oe_mols_from_3d', oe_mols_from_3d), \\n ('rdk_mols_from_smiles_from_oe_mols_from_3d', rdk_mols_from_smiles_from_oe_mols_from_3d)),\\n (('oe_mols_from_smiles_from_rdk_mols_from_3d', oe_mols_from_smiles_from_rdk_mols_from_3d),\\n ('rdk_mols_from_3d', rdk_mols_from_3d)),\\n )\\n\\n\\nfor (set_1_name, mol_set_1), (set_2_name, mol_set_2) in mol_sets_to_compare:\\n set_1_name_to_mol = {mol.name:mol for mol in mol_set_1 if not(mol is None)}\\n set_2_name_to_mol = {mol.name:mol for mol in mol_set_2 if not(mol is None)}\\n names_in_common = set(set_1_name_to_mol.keys()) & set(set_2_name_to_mol.keys())\\n print()\\n print()\\n print()\\n print('There are {} molecules in the {} set'.format(len(mol_set_1), set_1_name))\\n print('There are {} molecules in the {} set'.format(len(mol_set_2), set_2_name))\\n print('These sets have {} molecules in common'.format(len(names_in_common)))\\n \\n graph_matches = 0\\n rdk_smiles_matches = 0\\n oe_smiles_matches = 0\\n \\n errored_graph_comparisons = 0\\n errored_rdk_smiles_comparisons = 0\\n errored_oe_smiles_comparisons = 0\\n \\n for name in names_in_common:\\n set_1_mol = set_1_name_to_mol[name]\\n set_2_mol = set_2_name_to_mol[name]\\n nx_match = compare_mols_using_nx(set_1_mol, set_2_mol)\\n if nx_match:\\n graph_matches += 1\\n \\n try:\\n rdk_smi_match = compare_mols_using_smiles(set_1_mol, set_2_mol, \\n RDKTKW, \\n 'OE--(RDKTKW)-->SMILES: ','RDK--(RDKTKW)-->SMILES:',\\n ...\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nAnd perform the actual 
comparisons\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"LA_Team\\/Facies_classification_LA_TEAM_05_VALIDATION.ipynb\\\".\\nThe first task is:\\nPrediction\\nCan you write Python code for it?\\n\",\"targets\":\"\\n#Load testing data\\ntest_data = pd.read_csv('..\\/validation_data_nofacies.csv')\\n\\n # Train classifier\\n #clf = make_pipeline(make_union(VotingClassifier([(\\\"est\\\", ExtraTreesClassifier(criterion=\\\"gini\\\", max_features=1.0, n_estimators=500))]), FunctionTransformer(lambda X: X)), XGBClassifier(learning_rate=0.73, max_depth=10, min_child_weight=10, n_estimators=500, subsample=0.27))\\n #clf = make_pipeline( KNeighborsClassifier(n_neighbors=5, weights=\\\"distance\\\") ) \\n #clf = make_pipeline(MaxAbsScaler(),make_union(VotingClassifier([(\\\"est\\\", RandomForestClassifier(n_estimators=500))]), FunctionTransformer(lambda X: X)),ExtraTreesClassifier(criterion=\\\"entropy\\\", max_features=0.0001, n_estimators=500))\\n # * clf = make_pipeline( make_union(VotingClassifier([(\\\"est\\\", BernoulliNB(alpha=60.0, binarize=0.26, fit_prior=True))]), FunctionTransformer(lambda X: X)),RandomForestClassifier(n_estimators=500))\\n\\n# # Prepare training data\\n# X_tr = X\\n# y_tr = y\\n\\n# # Augment features\\n# X_tr, padded_rows = augment_features(X_tr, well, depth)\\n\\n# # Removed padded rows\\n# X_tr = np.delete(X_tr, padded_rows, axis=0)\\n# y_tr = np.delete(y_tr, padded_rows, axis=0) \\n\\n# Prepare test data\\nwell_ts = test_data['Well Name'].values\\ndepth_ts = test_data['Depth'].values\\nX_ts = test_data[feature_names].values\\n\\n\\n \\ny_pred = []\\nprint('.' * 100)\\nfor seed in range(100):\\n np.random.seed(seed)\\n\\n # Make training data.\\n X_train, padded_rows = augment_features(X, well, depth)\\n y_train = y\\n X_train = np.delete(X_train, padded_rows, axis=0)\\n y_train = np.delete(y_train, padded_rows, axis=0) \\n\\n #\u00a0Train classifier \\n clf = make_pipeline(XGBClassifier(learning_rate=0.12,\\n max_depth=3,\\n min_child_weight=10,\\n n_estimators=150,\\n seed=seed,\\n colsample_bytree=0.9))\\n\\n # Make blind data.\\n X_test, _ = augment_features(X_ts, well_ts, depth_ts)\\n\\n # Train...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"FDMS\\/TME3\\/Submit_V0.ipynb\\\".\\nThe first task is:\\nStrangely we notice that the less observations there is, the more it rains on average\\nHowever more of the expected rainfall fall below 0.5\\nWhat prediction should we make if there is no data?\\nCan you write Python code for it?\\n\",\"targets\":\"\\nprint(\\\"%d observations\\\" %(len(raw)))\\n#print(\\\"%d fully filled, %d partly filled, %d fully empty\\\"\\n# %(len(noAnyNan),len(noFullNan),len(raw)-len(noFullNan)))\\nprint(\\\"%0.1f%% fully filled, %0.1f%% partly filled, %0.1f%% fully empty\\\"\\n %(len(noAnyNan)\\/float(len(raw))*100,\\n len(noFullNan)\\/float(len(raw))*100,\\n (len(raw)-len(noFullNan))\\/float(len(raw))*100))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file 
\\\"18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python\\/00-Crash-Course-Basics\\/04-Crash-Course-Review-Exercises.ipynb\\\".\\nThe first task is:\\nSet Numpy's Random Seed to 101\\nCan you write Python code for it?\\n\",\"targets\":\"\\nnp.random.seed(101)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"1.1.4 The Model: Linear Regressor\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nfrom sklearn.linear_model import LinearRegression\\nimport numpy as np\\nfrom sklearn.metrics import mean_squared_error\\n\\nlin_reg = LinearRegression()\\nlin_reg.fit(X_train, y_train)\\n\\ny_pred_train_lin = lin_reg.predict(X_train)\\n\\nlin_rmse_train = np.sqrt(mean_squared_error(y_pred_train_lin, y_train))\\n\\nlin_rmse_train\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Code\\/SQL\\/SQL_Bootcamp_Stern_2016.ipynb\\\".\\nThe first task is:\\nUsing CASE WHEN to create a pivot table\\nSay you want to see revenue broken out by gender and by date. Right now, the only way we know how to do this is to add \\\"date\\\" to the GROUP BY clause. The query below is the same as the one from your last challenge, only we've added date to both the SELECT clause and the GROUP BY clause.\\nCan you write Python code for it?\\n\",\"targets\":\"\\ncase_pivot_ex = widgets.Textarea(value='''SELECT\\n date,\\n CASE WHEN\\n B.author = 'Austen' THEN 'Female'\\n ELSE 'Male'\\n END AS gender,\\n SUM(S.revenue) AS total_revenue\\nFROM\\n book_table B \\n JOIN sales_table S ON B.book = S.book\\nGROUP BY \\n date, gender'''\\n, width = '50em', height = '18em')\\ndisplay(case_pivot_ex) \\n\\ncase_pivot_ex_b = widgets.Button(description='Run', width='10em', height='2.5em', color='white',background_color='black', border_color='black')\\ndisplay(case_pivot_ex_b)\\nrun_q(case_pivot_ex, case_pivot_ex_b)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"(b)\\nPomo\u0107u funkcije datasets.make_classification generirajte dva skupa podataka od $N=200$ primjera: jedan s $n=2$ dimenzije i drugi s $n=100$ dimenzija. Primjeri neka dolaze iz dviju klasa, s time da svakoj klasi odgovaraju dvije grupe (n_clusters_per_class=2), kako bi problem bio ne\u0161to slo\u017eeniji, tj. nelinearniji. Neka sve zna\u010dajke budu informativne. Podijelite skup primjera na skup za u\u010denje i skup za ispitivanje u omjeru 1:1.\\nNa oba skupa optimirajte SVM s jezgrenom funkcijom RBF, u re\u0161etci $C\\\\in{2^{-5},2^{-4},\\\\dots,2^{15}}$ i $\\\\gamma\\\\in{2^{-15},2^{-14},\\\\dots,2^{3}}$. Prika\u017eite povr\u0161inu pogre\u0161ke modela na skupu za u\u010denje i skupu za provjeru, i to na oba skupa podataka (ukupno \u010detiri grafikona) te ispi\u0161ite optimalne kombinacije hiperparametara. 
Za prikaz povr\u0161ine pogre\u0161ke modela mo\u017eete koristiti funkciju mlutils.plot_error_surface.\\n\",\"targets\":\"from sklearn.datasets import make_classification\\nfrom sklearn.model_selection import train_test_split\\n\\n# Va\u0161 k\u00f4d ovdje\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"And the result:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nsites_left = sites.copy()\\norg = 'org'\\ntour=[]\\ntour.append(sites_left.pop( sites_left.index(org)))\\n\\nwhile len(sites_left) > 0:\\n \\n for k in sites_left:\\n if x[(org,k)].varValue ==1:\\n tour.append( sites_left.pop( sites_left.index(k)))\\n org=k\\n break\\n \\ntour.append('org')\\n\\ntour_legs = [distances[(tour[i-1], tour[i])] for i in range(1,len(tour))]\\n\\nprint('Found optimal tour!')\\nprint(' -> '.join(tour))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Again, we could use EFetch to obtain more information for each of these\\njournal IDs.\\nESearch has many useful options \u2014 see the ESearch help\\npage\\nfor more information.\\nEPost: Uploading a list of identifiers\\nEPost uploads a list of UIs for use in subsequent search strategies; see\\nthe EPost help\\npage\\nfor more information. It is available from Biopython through the\\nBio.Entrez.epost() function.\\nTo give an example of when this is useful, suppose you have a long list\\nof IDs you want to download using EFetch (maybe sequences, maybe\\ncitations \u2013 anything). When you make a request with EFetch your list of\\nIDs, the database etc, are all turned into a long URL sent to the\\nserver. If your list of IDs is long, this URL gets long, and long URLs\\ncan break (e.g. some proxies don\u2019t cope well).\\nInstead, you can break this up into two steps, first uploading the list\\nof IDs using EPost (this uses an \u201cHTML post\u201d internally, rather than an\\n\u201cHTML get\u201d, getting round the long URL problem). With the history\\nsupport, you can then refer to this long list of IDs, and download the\\nassociated data with EFetch.\\nLet\u2019s look at a simple example to see how EPost works \u2013 uploading some\\nPubMed identifiers:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nfrom Bio import Entrez\\nEntrez.email = \\\"A.N.Other@example.com\\\" # Always tell NCBI who you are\\nid_list = [\\\"19304878\\\", \\\"18606172\\\", \\\"16403221\\\", \\\"16377612\\\", \\\"14871861\\\", \\\"14630660\\\"]\\nprint(Entrez.epost(\\\"pubmed\\\", id=\\\",\\\".join(id_list)).read())\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"
    \\n \u05d1\u05e7\u05d5\u05d3 \u05e9\u05dc\u05de\u05e2\u05dc\u05d4 \u05d9\u05e6\u05e8\u05e0\u05d5 \u05e8\u05e9\u05d9\u05de\u05d4 \u05e8\u05d9\u05e7\u05d4, \u05e9\u05d0\u05d5\u05ea\u05d4 \u05e0\u05de\u05dc\u05d0 \u05d1\u05de\u05e9\u05ea\u05de\u05e9\u05d9\u05dd \u05e9\u05e0\u05d2\u05e0\u05d5\u05d1<\\/strike> \u05e9\u05e0\u05e9\u05d0\u05d9\u05dc \u05de\u05e1\u05e0\u05d9\u05d9\u05dc\u05e6'\u05d0\u05d8.
    \\n \u05e0\u05e2\u05d1\u05d9\u05e8 \u05d0\u05ea \u05d4\u05e4\u05e8\u05d8\u05d9\u05dd \u05e9\u05dc \u05db\u05dc \u05d0\u05d7\u05d3 \u05de\u05d4\u05de\u05e9\u05ea\u05de\u05e9\u05d9\u05dd \u05d4\u05de\u05d5\u05e4\u05d9\u05e2\u05d9\u05dd \u05d1\u05besnailchat_users<\\/var>, \u05dc\u05be__init__<\\/code> \u05e9\u05dc User<\\/var>,
    \\n \u05d5\u05e0\u05e6\u05e8\u05e3 \u05d0\u05ea \u05d4\u05de\u05d5\u05e4\u05e2 \u05d4\u05d7\u05d3\u05e9 \u05e9\u05e0\u05d5\u05e6\u05e8 \u05dc\u05ea\u05d5\u05da \u05d4\u05e8\u05e9\u05d9\u05de\u05d4 \u05d4\u05d7\u05d3\u05e9\u05d4 \u05e9\u05d9\u05e6\u05e8\u05e0\u05d5.\\n<\\/p>\\n\\n
    \\n \u05e2\u05db\u05e9\u05d9\u05d5 \u05d4\u05e8\u05e9\u05d9\u05de\u05d4 our_users<\\/var> \u05d4\u05d9\u05d0 \u05e8\u05e9\u05d9\u05de\u05d4 \u05dc\u05db\u05dc \u05d3\u05d1\u05e8, \u05e9\u05db\u05d5\u05dc\u05dc\u05ea \u05d0\u05ea \u05db\u05dc \u05d4\u05de\u05e9\u05ea\u05de\u05e9\u05d9\u05dd \u05d4\u05d7\u05d3\u05e9\u05d9\u05dd \u05e9\u05d4\u05e6\u05d8\u05e8\u05e4\u05d5 \u05dc\u05e8\u05e9\u05ea \u05d4\u05d7\u05d1\u05e8\u05ea\u05d9\u05ea \u05e9\u05dc\u05e0\u05d5:\\n<\\/p>\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nprint(our_users[0])\\nprint(our_users[1])\\nprint(our_users[2])\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"2. Prepare training and testing data\\nEach flower in this dataset contains the following features and labels\\n* features - measurements of the flower petals and sepals\\n* labels - the flower species (setosa, versicolor, or virginica) represented as a 0, 1, or 2.\\nOur train_test_split function will seperate the data as follows\\n* (features_train, labels_train) - 80% of the data prepared for training\\n* (features_test, labels_test) - 20% of the data prepared for making our predictions and evaluating our model\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n#Import train_test_split\\nfrom sklearn.model_selection import train_test_split\\n\\nfeatures_train, features_test, labels_train, labels_test = train_test_split(iris.data,iris.target,test_size=0.2,random_state=1)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"A mock scaling function\\nIn this tutorial, we will demonstrate the pyfssa routines with a mock scaling function\\n\\\\begin{equation}\\n\\\\tilde{f}(x) = e^{-(x+1)^2}\\n\\\\end{equation}\\n\",\"targets\":\"def mock_scaling_f(x):\\n \\\"\\\"\\\"Mock scaling function\\\"\\\"\\\"\\n return np.exp(-(x + 1.0)**2)\\n\\nx = np.linspace(-4.0, 2.0, num=200)\\n\\nfig, ax = plt.subplots()\\nax.plot(x, mock_scaling_f(x), label=r'$\\\\tilde{f}(x)$', rasterized=True)\\nax.set_xbound(x.min(), x.max())\\nax.set_ybound(0.0, 1.1)\\nax.set_xlabel(r'$x$')\\nax.legend()\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"07\\/pandas_cheatsheet.ipynb\\\".\\nThe first task is:\\nSave the cats to a separate variable called \\\"cats.\\\" Save the dogs to a separate variable called \\\"dogs.\\\"\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndogs = df[df['animal'] == 'dog']\\ncats = df[df['animal'] == 'cat']\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%matplotlib inline\\n\\nimport phoebe\\n\\nlogger = phoebe.logger()\\n\\nb = phoebe.default_binary()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nAs always, let's do imports and initialize a logger and a new bundle. 
See Building a System for more details.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Carbon intensity of electricity [kg CO2\\/kWh]\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n#C_CO2 #carbon dioxide content of coal [kg\\/MJ]\\n#Elec_CO2 #carbon intensity of electricity [kg CO2\\/kWh]\\nvar('Elec_CO2 C_CO2');\\nElec_CO2_eq=Eq(Elec_CO2,C_CO2*3.6\\/eta)\\nElec_CO2_eq\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\n\\n
    \\n \\n \\\"Colab Run in Colab\\n <\\/a>\\n <\\/td>\\n \\n \\n \\\"GitHub\\n View on GitHub\\n <\\/a>\\n <\\/td>\\n \\n \\n \\\"Vertex\\n Open in Vertex AI Workbench\\n <\\/a>\\n <\\/td> \\n<\\/table>\\n\\nOverview\\nThis tutorial demonstrates how to use the TabNet algorithm on the Vertex AI platform.\\nTabNet combines the best of two worlds: it is explainable (similar to simpler tree-based models) while benefiting from high performance (similar to deep neural networks). This makes it great for retailers, finance and insurance industry applications such as predicting credit scores, fraud detection and forecasting. \\nTabNet uses a machine learning technique called sequential attention to select which model features to reason from at each step in the model. This mechanism makes it possible to explain how the model arrives at its predictions and helps it learn more accurate models. Thanks to this design, TabNet not only outperforms other neural networks and decision trees but also provides interpretable feature attributions. Releasing TabNet as a First Party Trainer in Vertex AI means you'll be able to...\\n\",\"targets\":\"import os\\n\\n# The Google Cloud Notebook product has specific requirements\\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\\\"\\/opt\\/deeplearning\\/metadata\\/env_version\\\")\\n\\n# Google Cloud Notebook requires dependencies to be installed with '--user'\\nUSER_FLAG = \\\"\\\"\\nif IS_GOOGLE_CLOUD_NOTEBOOK:\\n USER_FLAG = \\\"--user\\\"\\n\\n! pip3 install {USER_FLAG} --upgrade tensorflow\\n! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform tensorboard-plugin-profile\\n! gcloud components update --quiet\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Advanced solution: StarSpace embeddings\\nNow you are ready to train your own word embeddings! In particular, you need to train embeddings specially for our task of duplicates detection. Unfortunately, StarSpace could not be run on Windows and we recommend to use provided\\ndocker container or other alternatives.\\nHow it works and what's the main difference with word2vec?\\nThe main point in this section is that StarSpace can be trained specifically for some tasks. In contrast to word2vec model, which tries to train similar embeddings for words in similar contexts, StarSpace uses embeddings for the whole sentence (just as a sum of embeddings of words and phrases). Despite the fact that in both cases we get word embeddings as a result of the training, StarSpace embeddings are trained using some supervised data, e.g. a set of similar sentence pairs, and thus they can better suit the task.\\nIn our case, StarSpace should use two types of sentence pairs for training: \\\"positive\\\" and \\\"negative\\\". \\\"Positive\\\" examples are extracted from the train sample (duplicates, high similarity) and the \\\"negative\\\" examples are generated randomly (low similarity assumed). \\nHow to choose the best params for the model?\\nNormally, you would start with some default choice and then run extensive experiments to compare different strategies. However, we have some recommendations ready for you to save your time:\\n- Be careful with choosing the suitable training mode. 
In this task we want to explore texts similarity which corresponds to trainMode = 3.\\n- Use adagrad optimization (parameter adagrad = true).\\n- Set the length of phrase equal to 1 (parameter ngrams), because we need embeddings only for words.\\n- Don't use a large number of epochs (we think that 5 should be enough).\\n- Try dimension dim equal to 100.\\n- To compare embeddings usually cosine similarity is used.\\n- Set minCount greater than 1 (for example, 2) if you don't want to get embeddings for extremely rare words.\\n- Parameter verbose = true could show you the progress of the training...\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n######### TRAINING HAPPENING HERE #############\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Both clusters have mixed content, although cluster 1 is much purer than cluster 0:\\n* Cluster 0: artists, songwriters, professors, politicians, writers, etc.\\n* Cluster 1: baseball players, hockey players, soccer (association football) players, etc.\\nTop words of cluster 1 are all related to sports, whereas top words of cluster 0 show no clear pattern.\\nRoughly speaking, the entire dataset was divided into athletes and non-athletes. It would be better if we sub-divided non-atheletes into more categories. So let us use more clusters. How about K=10?\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nk = 10\\nvisualize_document_clusters(wiki, tf_idf, centroids[k](), cluster_assignment[k](), k, map_index_to_word)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"2) Deffered Acceptance\\n\u30de\u30c3\u30c1\u30f3\u30b0\u904e\u7a0b\\nM0, M1\u306fF0\u306b\u30d7\u30ed\u30dd\u30fc\u30ba\u3057\u3001M2\u306fF2\u306b\u30d7\u30ed\u30dd\u30fc\u30ba\u3092\u3059\u308b\u3002\\n\u3053\u306e\u3068\u304d\u3001M2\u306f\u8ab0\u3068\u3082\u7af6\u5408\u305b\u305aF2\u3068\u4ed8\u304d\u5408\u3046\u3053\u3068\u304c\u51fa\u6765\u308b\u304c\u3001M0\u3068M1\u306f\u7af6\u5408\u3059\u308b\u3002F0\u304b\u3089\u3059\u308b\u3068\u3001M1\u3088\u308aM0\u306e\u65b9\u304c\u597d\u307f\u306a\u306e\u3067\u3001M1\u3092\u65ad\u308aM0\u3068\u4ed8\u304d\u5408\u3046\u3068\u3044\u3046\u9078\u629e\u3092\u884c\u3046\u3002\\n\u6b8b\u3063\u305fM1\u306f\u7b2c\u4e8c\u5e0c\u671b\u306eF2\u306b\u30d7\u30ed\u30dd\u30fc\u30ba\u3092\u3059\u308b\u3002F2\u306fM2\u304b\u3089\u30d7\u30ed\u30dd\u30fc\u30ba\u3092\u53d7\u3051\u3066\u4ed8\u304d\u5408\u304a\u3046\u3068\u3057\u3066\u3044\u305f\u304c\u3001M2\u3088\u308aM1\u306e\u65b9\u304c\u597d\u307f\u306a\u306e\u3067\u3001M2\u3092\u65ad\u308aM1\u3068\u4ed8\u304d\u5408\u3046\u3068\u3044\u3046\u9078\u629e\u3092\u884c\u3046\u3002\\n\u6b8b\u3063\u305fM2\u306f\u7b2c\u4e8c\u5e0c\u671b\u306eF0\u306b\u30d7\u30ed\u30dd\u30fc\u30ba\u3092\u3059\u308b\u3002F0\u306fM0\u304b\u3089\u30d7\u30ed\u30dd\u30fc\u30ba\u3092\u53d7\u3051\u3066\u4ed8\u304d\u5408\u304a\u3046\u3068\u3057\u3066\u3044\u305f\u304c\u3001M0\u3088\u308aM2\u306e\u65b9\u304c\u597d\u307f\u306a\u306e\u3067\u3001M0\u3092\u65ad\u308aM2\u3068\u4ed8\u304d\u5408\u3046\u3068\u3044\u3046\u9078\u629e\u3092\u884c\u3046\u3002 
\\n\u6b8b\u3063\u305fM0\u306f\u7b2c\u4e8c\u5e0c\u671b\u306eF1\u306b\u30d7\u30ed\u30dd\u30fc\u30ba\u3092\u3057\u3066\u3001\u8ab0\u3068\u3082\u7af6\u5408\u305b\u305a\u306b\u4ed8\u304d\u5408\u3046\u3002\\n\u30de\u30c3\u30c1\u30f3\u30b0\u7d50\u679c\\nM0 - F1\\nM1 - F2\\nM2 - F0\\n\u3053\u306e\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3067\u306f\u99c6\u3051\u843d\u3061\u3059\u308b\u30da\u30a2\u306f\u5b58\u5728\u305b\u305a\u300c\u5b89\u5b9a\u300d\u306a\u30de\u30c3\u30c1\u30f3\u30b0\u304c\u5b9f\u73fe\u3055\u308c\u3001\u4e0a\u8a18\u306e\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3067\u751f\u3058\u305f\u554f\u984c\u3092\u56de\u907f\u3059\u308b\u3053\u3068\u304c\u51fa\u6765\u305f\u3002\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nprop_matched, resp_matched = mf.DA(prop_prefs, resp_prefs)\\n\\nprop_matched\\n\\nresp_matched\\n\\nmf.Graph(prop_matched, resp_matched)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"(d)\\nThe rotor frequency is:\\n$$f_{re} = \\\\frac{p\\\\cdot n_\\\\text{slip}}{120}$$\\n\",\"targets\":\"fre = n_slip*p \\/ 120\\nprint('''\\nfre = {:.2f} Hz\\n============='''.format(fre))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Cleanin' the data\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# Fill in blank hashtags\\ndf = df.where((pd.notnull(df)), np.nan)\\ndf[\\\"hashtags\\\"].fillna('')\\n\\n# Add some date\\/time things\\ndf[\\\"created_at\\\"] = pd.to_datetime(df[\\\"created_at\\\"], errors='coerce')\\n\\ndf[\\\"day_of_week\\\"] = df[\\\"created_at\\\"].apply(lambda x: x.weekday())\\ndf[\\\"day_of_month\\\"] = df[\\\"created_at\\\"].apply(lambda x: x.day)\\ndf[\\\"month\\\"] = df[\\\"created_at\\\"].apply(lambda x: x.month)\\ndf[\\\"time_of_day\\\"] = df[\\\"created_at\\\"].apply(lambda x: get_time_of_day(x))\\n\\ntod_Dummy = pd.get_dummies(df['time_of_day'])\\nprint(tod_Dummy.head(5))\\nprint tod_Dummy.count()\\n# del tod_Dummy['shutdown']\\n\\n# df['in_reply_to_screen_name'].fillna(-1)\\n# df['in_reply_to_status_id'].fillna(-1)\\n# df['in_reply_to_user_id'].fillna(-1)\\n# df['retweeted_status'].fillna(-1)\\n# df['retweeted'].fillna(-1)\\ndf['retweet_count'].fillna(np.nan)\\ndf['favorite_count'].fillna(np.nan)\\ndf[\\\"hashtags\\\"].fillna(np.nan)\\ndf[\\\"hashtags\\\"] = df[\\\"hashtags\\\"].apply(lambda x: str(x)[1:-1])\\ndf.loc[df[\\\"hashtags\\\"]=='a',\\\"hashtags\\\"] = ''\\nlist(df.columns.values)\\n\\n#Potentially remove, just cleaning for analysis sake\\ndel df['Unnamed: 0']\\ndel df['truncated']\\ndel df['user_mentions']\\ndel df['urls']\\ndel df['source']\\ndel df['lang']\\ndel df['place']\\ndel df['favorited']\\ndel df['media']\\ndel df['user']\\n\\n# More likely to remove\\ndel df['in_reply_to_status_id']\\ndel df['in_reply_to_user_id']\\ndel df['retweeted']\\ndel df['retweeted_status']\\nlen(df)\\n\\ndf.plot(x='created_at', y='day_of_week', kind='hist')\\n# fdf = df[[\\\"created_at\\\",\\\"id\\\",\\\"text\\\",\\\"hashtags\\\"]]\\n# str(fdf\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 
2000?\\n\",\"targets\":\"response = requests.get(\\\"https:\\/\\/api.forecast.io\\/forecast\\/e554f37a8164ce189acd210d00a452e0\\/40.781750,-73.966641,346593600\\\")\\nweather_data = response.json()\\nweather_data.keys()\\n\\n#print(weather_data['daily']['data'][0])\\n\\nfor Christmas in weather_data['daily']['data']:\\n Convert_Celsius = (Christmas['temperatureMax'] - 32) * 5 \\/ 9\\n print(\\\"The maximum temperature on Christmas Day 1980 in Centralpark was\\\", \\n Christmas['temperatureMax'], \\\"Fahrenheit, or\\\", round(Convert_Celsius), \\\"degrees Celsius.\\\")\\n\\nresponse = requests.get(\\\"https:\\/\\/api.forecast.io\\/forecast\\/e554f37a8164ce189acd210d00a452e0\\/40.781750,-73.966641,662126400\\\")\\nweather_data_1990 = response.json()\\nweather_data_1990.keys()\\n\\nresponse = requests.get(\\\"https:\\/\\/api.forecast.io\\/forecast\\/e554f37a8164ce189acd210d00a452e0\\/40.781750,-73.966641,977745600\\\")\\nweather_data_2000 = response.json()\\nweather_data_2000.keys()\\n\\nfor Christmas in weather_data_1990['daily']['data']:\\n Convert_Celsius = (Christmas['temperatureMax'] - 32) * 5 \\/ 9\\n print(\\\"The maximum temperature on Christmas Day 1990 in Centralpark was\\\", \\n Christmas['temperatureMax'], \\\"Fahrenheit, or\\\", round(Convert_Celsius), \\\"degrees Celsius.\\\")\\n\\nfor Christmas in weather_data_2000['daily']['data']:\\n Convert_Celsius = (Christmas['temperatureMax'] - 32) * 5 \\/ 9\\n print(\\\"The maximum temperature on Christmas Day 1980 in Centralpark was\\\", \\n Christmas['temperatureMax'], \\\"Fahrenheit, or\\\", round(Convert_Celsius), \\\"degrees Celsius.\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say \\\"{temperature} and cloudy\\\" instead of just the temperature.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nimport requests\\nresponse = requests.get(\\\"https:\\/\\/api.forecast.io\\/forecast\\/c9d64e80aa02ca113562a075e57256d7\\/25.7742700,-80.1936600\\\")\\nmiami_forecast = response.json()\\nprint(forecast_miami.keys())\\n\\nprint(type(miami_forecast['hourly']))\\nprint(miami_forecast['hourly'].keys())\\nprint(miami_forecast['hourly']['data'][0])\\n\\nmiami_forecast_today = miami_forecast['hourly']['data']\\n\\nprint(\\\"Here is an estimation of how the weather in Miami will look within the next few hours\\\")\\nfor item in forecast_miami_today:\\n if item['cloudCover'] > 0.5:\\n print(\\\"It will be\\\", item['temperature'],\\\"degrees Farenheit and overcast.\\\")\\n else:\\n print(\\\"It will be\\\", item['temperature'], \\\"degrees Farenheit\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Although the classes are distinct in the aggregate, where the median carapace (outer shell) length is around 36 mm for the lobsters that died and 42 mm for those that survived, they are not cleanly separable.\\nNotation\\nTo develop some intuition and a method, let's now turn to a more general setting and work on synthetic data sets.\\nLet the data consist of $m$ data points, where each point is $d$-dimensional. 
Each dimension corresponds to some continuously-valued predictor. In addition, each data point will have a binary label, whose value is either 0 or 1.\\nDenote each point by an augumented vector, $x_i$, such that\\n$$\\n\\\\begin{array}{rcl}\\n x_i\\n & \\\\equiv &\\n \\\\left(\\\\begin{array}{c}\\n 1 \\\\\\n x_{i,1} \\\\\\n x_{i,2} \\\\\\n \\\\vdots \\\\\\n x_{i,d}\\n \\\\end{array}\\\\right)\\n .\\n\\\\end{array}\\n$$\\nThat is, the point is the $d$ coordinates augmented by an initial dummy coordinate whose value is 1. This convention is similar to what we did in linear regression.\\nWe can also stack these points as rows of a matrix, $X$, again, just as we did in regression:\\n$$\\n\\\\begin{array}{rcl}\\n X \\\\equiv\\n \\\\left(\\\\begin{array}{c}\\n x_0^T \\\\\\n x_1^T \\\\\\n \\\\vdots \\\\\\n x_{m-1}^T\\n \\\\end{array}\\\\right)\\n & = &\\n \\\\left(\\\\begin{array}{ccccc}\\n 1 & x_{0,1} & x_{0,2} & \\\\cdots & x_{0,d} \\\\\\n 1 & x_{1,1} & x_{1,2} & \\\\cdots & x_{1,d} \\\\\\n & & & \\\\vdots & \\\\\\n 1 & x_{m-1,1} & x_{m-1,2} & \\\\cdots & x_{m-1,d} \\\\\\n \\\\end{array}\\\\right).\\n\\\\end{array}\\n$$\\nWe will take the labels to be a binary column vector, $l \\\\equiv \\\\left(l_0, l_1, \\\\ldots, l_{m-1}\\\\right)^T$.\\nAn example\\nWe've pre-generated a synethetic data set consisting of labeled data points. Let's download and inspect it, first as a table and then visually.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndf = pd.read_csv ('http:\\/\\/vuduc.org\\/cse6040\\/logreg_points_train.csv')\\n\\ndisplay (df.head ())\\nprint \\\"...\\\"\\ndisplay (df.tail ())\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Simple Cross-Validation: Split the data set into training and test data\\nfeature_matrix_train, feature_matrix_test, target_train, target_test = model_selection.train_test_split(feature_matrix, target, test_size=0.25, random_state=33)\\n\\nfeature_matrix_train.count()\\n\\nfeature_matrix_test.count()\\n\\ntarget_train.head()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nStep 2: Simple Cross-Validation\\nTasks:\\n- split your feature matrix X and target vector into train and test subsets using sklearn model_selection.train_test_split\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Averaged across 999 simulation runs (plus the original), the average average conditional entropy is notably higher than the true average conditional entropy of 0.116 bits:\\n\",\"targets\":\"boot.mean()\\n\\nlen(boot),boot.min()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Kasus uji - jangan diubah!\\np_emit = pd.DataFrame({\\n 's1': {'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25},\\n 's2': {'A': 0.10, 'C': 0.40, 'G': 0.40, 'T': 0.10}\\n}).T\\np_trans = pd.DataFrame({\\n 's1': {'s1': 0.89, 's2': 0.10},\\n 's2': {'s1': 0.1, 's2': 0.9}\\n}).T\\nmdl = HMM(\\n p_start=np.array([0.5, 0.5]),\\n p_trans=p_trans,\\n p_emit=p_emit,\\n p_stop=np.array([0.01, 0.])\\n)\\n\\nseq = list('CGACTACTGACTACTCGCCGACGCGACTGCCGTCTATACTGCGCATACGGC')\\nhmm_predictions = mdl.predict(seq)\\nprint(\\\"sequence: 
{}\\\".format(''.join(seq)))\\nprint(\\\"hmm pred: {}\\\".format(''.join(map(str, hmm_predictions))))\\n\\nassert ''.join(map(str, hmm_predictions)) == \\\"000000000000000111111111111111100000000000000000000\\\"\\nassert np.allclose(mdl.forward_backward(seq).sum(axis=1), np.ones(len(seq)))\\nassert np.isclose(mdl.log_probability(seq), -76.1486)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nContoh kasus di bawah ini diadaptasi dari sini. Contoh ini adalah penggunaan HMM untuk mencari sequence DNA yang banyak mengandung nukleotida CG. Contoh kasus ini merupakan penyederhanaan dari kasus biologi yang riil dalam DNA sequencing.\\nAnda dapat menggunakan kode di bawah ini untuk menguji hasil implementasi Anda. Kalau tidak ada pesan error, berarti implementasi Anda sudah benar.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Create the model from a PSSTCase, optionally passing a sel_bus\\nm = NetworkModel(case, sel_bus='Bus1')\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nI. Creating a NetworkModel\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%matplotlib inline\\nfrom sympy.plotting import plot3d\\n\\nplot(sin(x), (x, -pi, pi));\\n\\nmonkey_saddle = x**3 - 3*x*y**2\\np = plot3d(monkey_saddle, (x, -2, 2), (y, -2, 2))\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nPlotting\\nWe can make 2D and 3D plots\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"At first sight we can see a small pattern, there are some peaks at the same distance. Looking more closely:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nweekdays = []\\nfor date in users.date_account_created:\\n weekdays.append(date.weekday())\\nweekdays = pd.Series(weekdays)\\n\\nsns.barplot(x = weekdays.value_counts().index, y=weekdays.value_counts().values, order=range(0,7))\\nplt.xlabel('Week Day')\\nsns.despine()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Code\\/notebooks\\/bootcamp_advgraphics_seaborn.ipynb\\\".\\nThe first task is:\\nDatasets\\nThere are some classical datasets that get used to demonstrate different types of plots. We will use several of them here.\\n\\ntips : This dataset has informaiton on waiter tips. Includes information such as total amount of the bill, tip amount, sex of waiter, what day of the week, which meal, and party size.\\nanscombe: This dataset is a contrived example. It has 4 examples which differ drastically when you look at them, but they have the same correlation, regression coefficient, and $R^2$.\\ntitanic : This dataset has information on each of the passengers who were on the titanic. 
Includes information such as: sex, age, ticket class, fare paid, whether they were alone, and more.\\nCan you write Python code for it?\\n\",\"targets\":\"\\ntips = sns.load_dataset(\\\"tips\\\")\\nansc = sns.load_dataset(\\\"anscombe\\\")\\ntita = sns.load_dataset(\\\"titanic\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb\\\".\\nThe first task is:\\nHow many people have English 'en' as their Language of choice on the website?\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndata[data['Language'] == 'en'].count()[0]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\nnewaxis\\n\\n\ucc28\uc6d0 \ud655\uc7a5\\n\",\"targets\":\"a = np.arange(4)\\na, a.shape\\n\\nb = np.arange(4).reshape(4,1)\\nb, b.shape\\n\\nc = np.arange(4)[:, np.newaxis]\\nc\\n\\nc.shape\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Aproveitando o m\u00e9todo __call__ que definimos podemos sortear chamado o objeto tombola sem chamar o m\u00e9todo tombola.sorteia():\\n\",\"targets\":\"tombola()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"1. Load data into a Pandas dataframe\\n\",\"targets\":\"import pandas as pd # Dataframe package\\nfilename = '.\\/burrito_bootcamp.csv'\\ndf = pd.read_csv(filename)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"
    \\n
    \\nAfter this decomposition of our square matrix into eigenvectors and eigenvalues, let us briefly recapitulate how we can interpret those results. As we remember from our first linear algebra class in high school or college, both eigenvectors and eigenvalues are providing us with information about the distortion of a linear transformation: The eigenvectors are basically the direction of this distortion, and the eigenvalues are the scaling factor for the eigenvectors that describes the magnitude of the distortion. \\nIf we are performing the LDA for dimensionality reduction, the eigenvectors are important since they will form the new axes of our new feature subspace; the associated eigenvalues are of particular interest since they will tell us how \\\"informative\\\" the new \\\"axes\\\" are. \\nLet us briefly double-check our calculation and talk more about the eigenvalues in the next section.\\n
    \\n
    \\nChecking the eigenvector-eigenvalue calculation\\n[back to top]\\nA quick check that the eigenvector-eigenvalue calculation is correct and satisfies the equation:\\n$\\\\pmb A\\\\pmb{v} = \\\\lambda\\\\pmb{v}$ \\n
    \\nwhere\\n$\\\\pmb A = S_{W}^{-1}S_B\\\\\\n\\\\pmb{v} = \\\\; \\\\text{Eigenvector}\\\\\\n\\\\lambda = \\\\; \\\\text{Eigenvalue}$\\n\",\"targets\":\"for i in range(len(eig_vals)):\\n eigv = eig_vecs[:,i].reshape(4,1) \\n np.testing.assert_array_almost_equal(np.linalg.inv(S_W).dot(S_B).dot(eigv), \\n eig_vals[i] * eigv, \\n decimal=6, err_msg='', verbose=True)\\nprint('ok')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"1<\\/b> DataFrame (~ programmable Excel worksheet), 6<\\/b> Series (= columns), 1128819<\\/b> rows (= entries)\\nWe convert the text with a time to a real timestamp object.\\n\",\"targets\":\"log['timestamp'] = pd.to_datetime(log['timestamp'])\\nlog.head()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Change pie dimensions\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# As of now, the radius sizes are absolute, in pixels\\nwith pie1.hold_sync():\\n pie1.radius = 150\\n pie1.inner_radius = 100\\n\\n# Angles are in radians, 0 being the top vertical\\nwith pie1.hold_sync():\\n pie1.start_angle = -180\\n pie1.end_angle = 180\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Intrusion+detection.ipynb\\\".\\nThe first task is:\\nCheck that it is now an array with 3719 samples and 2 transfer values per sample.\\nCan you write Python code for it?\\n\",\"targets\":\"\\ntransfer_values_reduced.shape\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Contact map for a trajectory\\nIf you provide a single frame, the values in the contact map are 0 or 1: either the contact was present or it was not. But if you provide a longer trajectory, the values in the contact map are the fraction of frames during which the contact was present.\\nOther than using the entire trajectory, the process is exactly the same as for the single-frame example.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n%%time\\ntraj_contacts = ContactFrequency(traj)\\n\\n%%time\\nfig, ax = traj_contacts.residue_contacts.plot()\\nplt.xlabel(\\\"Residue\\\")\\n_ = plt.ylabel(\\\"Residue\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Investigate Data.ipynb\\\".\\nThe first task is:\\nAnalyse data\\nMain Table: Master\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndataframes['Master'].head()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"LectureProgrammingParadigms\\/Lecture Programming Paradigms.ipynb\\\".\\nThe first task is:\\nThe procedural style relies on procedure calls to create modularized code.\\nIt seeks to simplify the application code by creating small pieces that a developer can view easily. 
\\nEven though the procedural coding style is an older form of application development, it\u2019s still a viable approach when a task lends itself to step-by-step execution.\\n\\nFunctional: Any form of state or mutable data are avoided. The main advantage of this approach is that there aren\u2019t any side effects to consider. In addition, this coding style lends itself well to parallel processing because there is no state to consider.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nimport functools\\nmy_list = [1, 2, 3, 4, 5]\\ndef add(x, y):\\n return (x + y)\\nsum = functools.reduce(add, my_list) \\n# calculates add(add(add(add(1, 2), 3), 4), 5)\\n\\nprint(sum)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%%cython --cplus --compile-args=-std=c++11 -I $fwdpy_includes -I $fwdpp_includes -l sequence -l gsl -l gslcblas\\n#Import all Cython symbols defined\\n#in fwdpy's main module\\nfrom fwdpy.fwdpy cimport *\\nimport numpy as np\\n#Now, we define a C++ function that:\\n#1. Takes the C++ representation as an argument\\n#2. Returns a C++ vector of unsigned integers\\ncdef vector[unsigned] sfs_cpp(const singlepop_t * pop):\\n #declare our return value.\\n #This is a standard C++ vector.\\n #The C++ vector is imported as a \\n #side-effect of cimporting fwdpp's\\n #Cython API\\n cdef vector[unsigned] rv\\n #For a population of N diploids,\\n #there are N bins in the SFS \\n #(including fixations, which\\n #we don't deal with here).\\n #So we initialize the return\\n #value to 2N zeroes\\n rv.resize(2*pop.N,0)\\n \\n #i is a dummy variable\\n cdef size_t i = 0\\n #A population contains a \\n #vector[unsigned] that represents\\n #the count (no. occurrences) of\\n #every mutation. Warning: it also\\n #conatains mutations with a count of\\n #0 (zero) because fwdpp internally\\n #puts new variants in those spaces...\\n for i in range(pop.mcounts.size()):\\n #...so we check that\\n #a mutation's count\\n #is nonzero...\\n if pop.mcounts[i]>0:\\n #...and increment our return value\\n #accordingly.\\n rv[pop.mcounts[i]-1]+=1\\n #Return the SFS to Python\\n return rv\\n\\ndef sfs(Spop pop):\\n \\\"\\\"\\\"\\n This is the Python function that will return the \\n SFS for a fwdpy.Spop object.\\n \\n Note that we can specify the argument type in the\\n \\\"def\\\" line. \\n \\n This docstring can be processed by Sphinx, and so\\n we use Sphinx grammar for documenting the params,\\n and we make sure to provide a link to the documentation\\n of the parameter's expected type:\\n \\n :param pop: A :class:`fwdpy.fwdpy.Spop`\\n \\n :return: The site-frequency spectrum for pop\\n \\n :rtype: numpy.array with dtype numpy.uint32\\n \\\"\\\"\\\"\\n #Here, we call our Cython function.\\n ...\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nExample 1: the site-frequency spectrum of all mutations\\nThe first function that we will write will calculate the site-frequency-spectrum (SFS) of the entire population. 
We impose the following constraints to keep things simple:\\n\\nWe will only process single-deme objects (type fwdpy.Spop).\\n\\nOn to our code for the SFS:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Create a design matrix related to the optimal strategies.\\nX = []\\ny = []\\nseen_keys = {}\\nfor key, val in opt_actions.iteritems():\\n if key[:-1] in seen_keys:\\n # We've already seen this, continue.\\n continue\\n \\n alpha0 = float(key[0] + 1)\\n beta0 = float(key[1] + 1)\\n alpha1 = float(key[2] + 1)\\n beta1 = float(key[3] + 1)\\n \\n if alpha0 == alpha1 and beta0 == beta1:\\n # We're in a perfectly symmetric situtation, skip this then.\\n continue\\n \\n seen_keys = key[:-1]\\n \\n # Standard results for the Beta distribution.\\n # https:\\/\\/en.wikipedia.org\\/wiki\\/Beta_distribution\\n mean0 = alpha0\\/(alpha0 + beta0)\\n mean1 = alpha1\\/(alpha1 + beta1)\\n \\n std0 = np.sqrt(alpha0*beta0 \\/ (alpha0 + beta0 + 1)) \\/ (alpha0 + beta0)\\n std1 = np.sqrt(alpha1*beta1 \\/ (alpha1 + beta1 + 1)) \\/ (alpha1 + beta1)\\n \\n t = alpha0 + beta0 + alpha1 + beta1\\n X.append([mean0,mean1,std0,std1,t,1,alpha0 - 1,beta0 - 1,alpha1 - 1,beta1 - 1])\\n y.append(val)\\n \\nX = np.array(X)\\ny = np.array(y)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nIt seems like my laptop can look ahead at least sixteen steps into the future without dying - pretty good!\\nOptimal versus UCB\\nLet's try and figure out how the optimal strategy relates to the upper confidence bound (UCB) heuristic. Let's train a logistic regression model with the same inputs as a UCB strategy - mean, standard deviation, time - and see how well it can approximate the optimal strategy.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# The reference camera position can just be the identity\\nreference_pose = tf.constant(\\n [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]])\\n\\n# Accurate intrinsics are only important if we are trying to match a ground\\n# truth output. 
Here we just give intrinsics for a 16:9 image with the\\n# principal point in the center.\\nintrinsics = tf.constant([1.0, 1.0 * 16\\/9, 0.5, 0.5])\\n\\ndef render(xoffset, yoffset, zoffset):\\n # The translation is the final column of the pose matrix\\n target_pose = tf.constant(\\n [[1.0, 0.0, 0.0, -xoffset],\\n [0.0, 1.0, 0.0, -yoffset],\\n [0.0, 0.0, 1.0, -zoffset]])\\n image = mpi.render(layers, depths,\\n reference_pose, intrinsics, # Reference view\\n target_pose, intrinsics, # Target view\\n height=512, width=910)\\n return image\\n\\n# First move the camera along the X axis (left to right):\\nfor i in range(5):\\n xoffset = (i - 2) * 0.05\\n plt.subplot(1, 5, i + 1)\\n plt.imshow(render(xoffset, 0.0, 0.0))\\n plt.title('xoff = %f' % xoffset)\\n plt.axis('off')\\nplt.show()\\n\\n# And next along the Z-axis (moving forwards):\\nfor i in range(5):\\n zoffset = (i - 2) * 0.15\\n plt.subplot(1, 5, i + 1)\\n plt.imshow(render(0.0, 0.0, zoffset))\\n plt.title('zoff = %f' % zoffset)\\n plt.axis('off')\\nplt.show()\\n\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nGenerate new views from nearby camera positions\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"As an illustration of the high-level nature of Pandas plots, we can split multiple series into subplots with a single argument for plot:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nvariables.cumsum(0).plot(subplots=True)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"0.17\\/_downloads\\/fd79fe12dec0d8ba3f96e5d55db03054\\/plot_ecog.ipynb\\\".\\nThe first task is:\\nSometimes it is useful to make a scatterplot for the current figure view.\\nThis is best accomplished with matplotlib. 
We can capture an image of the\\ncurrent mayavi view, along with the xy position of each electrode, with the\\nsnapshot_brain_montage function.\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# We'll once again plot the surface, then take a snapshot.\\nfig = plot_alignment(info, subject='sample', subjects_dir=subjects_dir,\\n surfaces='pial')\\nmlab.view(200, 70)\\nxy, im = snapshot_brain_montage(fig, mon)\\n\\n# Convert from a dictionary to array to plot\\nxy_pts = np.vstack(xy[ch] for ch in info['ch_names'])\\n\\n# Define an arbitrary \\\"activity\\\" pattern for viz\\nactivity = np.linspace(100, 200, xy_pts.shape[0])\\n\\n# This allows us to use matplotlib to create arbitrary 2d scatterplots\\n_, ax = plt.subplots(figsize=(10, 10))\\nax.imshow(im)\\nax.scatter(*xy_pts.T, c=activity, s=200, cmap='coolwarm')\\nax.set_axis_off()\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Counter (python 2.7+)\\n\u7528\u5e8f\u5217\u751f\u6210Counter\u5bf9\u8c61\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ns = 'abcdeabcdabcaba'\\nc = Counter(s)\\nprint c\\n\\nprint c.most_common(3)\\n\\nprint sorted(c)\\n\\nprint ''.join(sorted(c.elements()))\\n\\nprint c.values()\\n\\nprint c.elements()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Get final accuracy of ensembled model\\nfrom sklearn.metrics import classification_report, accuracy_score\\n\\nfor individual_predictions in [rf_predictions, svm_predictions, lr_predictions]:\\n# classification_report(yTest.astype(int), individual_predictions.astype(int))\\n print \\\"Accuracy:\\\", round(accuracy_score(yTest.astype(int), individual_predictions.astype(int)),2)\\n\\n\\nprint classification_report(yTest.astype(int), ensembled_predictions.astype(int))\\nprint \\\"Ensemble Accuracy:\\\", round(accuracy_score(yTest.astype(int), ensembled_predictions.astype(int)),2)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nAnd we could assess the performance of the majority voted predictions like so:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Consider the following Ordinary Differential Equation (ODE2):\\n$\\\\frac{dy}{dx} = \\\\frac{1 - ycos(x)}{sin(x)}$, with $y(0.1) = \\\\frac{2.1}{sin(0,1)}$ and $x \\\\in [0.1,1]$\\nwe demand its punctual validity over a grid of $N$ equally spaced points.\\nNOTE: The solution to the ODE is $y = \\\\frac{x+2}{sin(x)}$\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# We construct the grid of points. Since the ODE only contains first order derivatives we use truncation order 1.\\n# Since we are using vectorized gdual we can instantiate only one gdual\\n\\nvalues = np.linspace(0.1,1,10)\\ngrid = gdual(values, \\\"x\\\", 1)\\n\\n# We define the quadratic error of the dCGP in the grid points\\ndef qe_ODE2(dCGP, grid):\\n retval = 0\\n out = dCGP([grid])[0]\\n y = np.array(out.constant_cf)\\n dydx = np.array(out.get_derivative({\\\"dx\\\" : 1}))\\n x = np.array(grid.constant_cf)\\n ode2 = (1. 
- y * cos(x)) \\/ sin(x)\\n retval += (ode2 - dydx) * (ode2 - dydx)\\n return sum(retval)\\n\\n# We define a penalty term associated to the initial conditions violation\\ndummy = (2.1)\\/sin(0.1)\\ndef ic_ODE2(dCGP):\\n x0 = 0.1\\n y0 = dummy\\n out = dCGP([gdual([x0])])[0]\\n return (out.constant_cf[0] - y0) * (out.constant_cf[0] - y0)\\n\\n# We run nexp experiments to accumulate statistic for the ERT\\nnexp = 100\\nstop = 500\\noffsprings = 10\\nres = []\\nprint(\\\"restart: \\\\t gen: \\\\t expression:\\\")\\nfor i in range(nexp):\\n dCGP = expression(inputs=1, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,100000))\\n g, best_chromosome = run_experiment(max_gen = stop, offsprings = offsprings, \\\\\\n quadratic_error=qe_ODE2, initial_conditions_error=ic_ODE2, dCGP=dCGP)\\n res.append(g)\\n dCGP.set(best_chromosome)\\n if g < (stop-1):\\n print(i, \\\"\\\\t\\\\t\\\", res[i], \\\"\\\\t\\\", dCGP([\\\"x\\\"]), \\\" a.k.a \\\", dCGP.simplify([\\\"x\\\"]))\\nres = np.array(res)\\n\\nERT = sum(res) \\/ sum(res<(stop-1))\\nprint(\\\"ERT Expected run time - avg. number of function evaluations needed: \\\", ERT * offsprings)\\nprint(\\\"Avg. number of function evaluations from Tsoulos paper: \\\", 742 * 200)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# export \\ndef grad_overflow(param_groups):\\n for group in param_groups:\\n for p in group:\\n if p.grad is not None:\\n s = float(p.grad.data.float().sum())\\n if s == float('inf') or s == float('-inf') or s != s: return True\\n return False\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nSo we can use it in the following function that checks for gradient overflow:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Data\\nWe will use the FashionMNIST dataset, which is of a similar format to MNIST but is richer and has more variance\\n\",\"targets\":\"batch_size = 512\\nctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()\\n\\ntransform = lambda x,y: (x.transpose((2,0,1)).astype('float32')\\/255., y)\\n\\ntrain_dataset = gluon.data.vision.FashionMNIST(train=True)\\ntest_dataset = gluon.data.vision.FashionMNIST(train=False)\\n\\ntrain_dataset_t = train_dataset.transform(transform)\\ntest_dataset_t = test_dataset.transform(transform)\\n\\ntrain_data = gluon.data.DataLoader(train_dataset_t, batch_size=batch_size, last_batch='rollover', shuffle=True, num_workers=5)\\ntest_data = gluon.data.DataLoader(test_dataset_t, batch_size=batch_size, last_batch='rollover', shuffle=True, num_workers=5)\\n\\nplt.figure(figsize=(20,10))\\nfor i in range(10):\\n ax = plt.subplot(1, 10, i+1)\\n ax.imshow(train_dataset[i][0].squeeze().asnumpy(), cmap='gray')\\n ax.axis('off')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Saving a csv file with some text in it separated by spaces:\\n\\nimport csv\\nwith open('eggs.csv', 'w') as csvfile:\\n spamwriter = csv.writer(csvfile, delimiter=' ')\\n spamwriter.writerow(['Spam'] * 5 + ['Baked Beans'])\\n spamwriter.writerow(['Spam', 'Lovely Spam', 'Wonderful Spam'])\\n\\n# Loading the csv file and join the elements with commas instead of spaces:\\n\\nwith 
open('eggs.csv', 'r') as csvfile:\\n spamreader = csv.reader(csvfile, delimiter=' ')\\n for row in spamreader:\\n print (', '.join(row))\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nThe csv (Comma Separated Values) files are one of the most common when working with databases. As stated in its name, these format defines the sepparation between elements in the file by a delimiter, typically the comma. Nevertheless, as this files can be defined using any delimiter, it is recommendable to specify which one you would like to use to avoid errors.\\nIn particular, we are going to work with the functions which allow us to save and load data:\\n\\ncsv.writer([filename, delimiter]): creates the csv file with the specified filename.\\ncsv.reader([filename, delimiter]): loads the csv file with the specified filename.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Cross-validation\\nWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.\\n\",\"targets\":\"num_folds = 5\\nk_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]\\n\\nX_train_folds = []\\ny_train_folds = []\\n################################################################################\\n# TODO: #\\n# Split up the training data into folds. After splitting, X_train_folds and #\\n# y_train_folds should each be lists of length num_folds, where #\\n# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #\\n# Hint: Look up the numpy array_split function. #\\n################################################################################\\nnum_example_each_fold = X_train.shape[0] \\/ num_folds\\nX_train_folds = np.array_split(X_train, num_example_each_fold)\\ny_train_folds = np.array_split(y_train, num_example_each_fold)\\n\\n\\n################################################################################\\n# END OF YOUR CODE #\\n################################################################################\\n\\n# A dictionary holding the accuracies for different values of k that we find\\n# when running cross-validation. After running cross-validation,\\n# k_to_accuracies[k] should be a list of length num_folds giving the different\\n# accuracy values that we found when using that value of k.\\nk_to_accuracies = {}\\n\\n\\n################################################################################\\n# TODO: #\\n# Perform k-fold cross validation to find the best value of k. For each #\\n# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #\\n# where in each case you use all but one of the folds as training data and the #\\n# last fold as a validation set. Store the accuracies for all fold and all #\\n# values of k in the k_to_accuracies dictionary. ...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%%bash\\ndocker build -f dockerfiles\\/hello_node.docker -t gcr.io\\/${PROJECT_ID}\\/hello-node:v2 .\\ndocker push gcr.io\\/${PROJECT_ID}\\/hello-node:v2\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nA declarative approach is being used here. 
Rather than starting or stopping new instances, you declare how many instances should be running at all times. Kubernetes reconciliation loops makes sure that reality matches what you requested and takes action if needed.\\nHere's a diagram summarizing the state of your Kubernetes cluster:\\n\\nRoll out an upgrade to your service\\nAt some point the application that you've deployed to production will require bug fixes or additional features. Kubernetes helps you deploy a new version to production without impacting your users.\\nFirst, modify the application by opening server.js so that the response is \\nbash\\nresponse.end(\\\"Hello Kubernetes World!\\\");\\nNow you can build and publish a new container image to the registry with an incremented tag (v2 in this case).\\nNote: Building and pushing this updated image should be quicker since caching is being taken advantage of.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Constant Energy Simulation\\nHere we demonstrate some code to run a simulation at constant energy. We start off by setting up some parameters of the simulation.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nN = 5000\\ndimension = 2\\nbox_size = 80.0\\ndisplacement, shift = space.periodic(box_size)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"#\\\"US041867445\\\"\\nwith open(\\\"big3_position.csv\\\",\\\"w+\\\") as f:\\n f.write(\\\"{}\\\\t{}\\\\t{}\\\\t{}\\\\t{}\\\\t{}\\\\t{}\\\\t{}\\\\t{}\\\\t{}\\\\n\\\".\\n format(\\\"Company_name\\\",\\\"Company_ID\\\",\\\"Big3Share\\\",\\\"Position\\\",\\\"Revenue\\\",\\\"Assets\\\",\\\"Employees\\\",\\\"MarketCap\\\",\\\"Exchange\\\",\\\"TypeEnt\\\"))\\n for id,g in ownership.groupby(\\\"BvD ID number\\\"):\\n sum_big3 = g.loc[g[\\\"Shareholder - BvD ID number\\\"].isin(['US149144472L', 'US320174431', 'US042456637']),\\\"max\\\"].sum()\\n t = g.loc[g[\\\"Shareholder - BvD ID number\\\"] != \\\"US041867445\\\",:].sort_values(by=\\\"max\\\",ascending=False,na_position=\\\"last\\\")\\n if sum_big3 == 0: position = 100\\n else: position = 1\\n for i,values in t.iterrows():\\n if isinstance(values[\\\"Shareholder - BvD ID number\\\"],float): continue\\n if values.values[3][:2] != \\\"ZZ\\\":\\n if values.values[-1] >=sum_big3: position+=1\\n else: break\\n r,a,e,m,exchange,typeent = companies.loc[companies[\\\"BvD ID number\\\"] == values[\\\"BvD ID number\\\"],:].values[0][-6:]\\n #print(companies.loc[companies[\\\"BvD ID number\\\"] == values[\\\"BvD ID number\\\"],:].values[0])\\n \\n f.write(\\\"{}\\\\t{}\\\\t{}\\\\t{}\\\\t{}\\\\t{}\\\\t{}\\\\t{}\\\\t{}\\\\t{}\\\\n\\\".format(values[\\\"Company name\\\"],values[\\\"BvD ID number\\\"],sum_big3,position,r,a,e,m,exchange,typeent))\\n \\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nStep2: Create file with the sum of big3 and the largest shareholder\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Can mix integers and floats freely in operations.\\n\\nIntegers and floating-point numbers can be mixed in arithmetic.\\nPython automatically converts integers to floats as needed.\\\"\\nPlease write code 
following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nprint('half is', 1 \\/ 2.0)\\nprint('three squared is', 3.0 ** 2)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"#@title \u691c\u51fa\u3057\u305f\u30aa\u30d6\u30b8\u30a7\u30af\u30c8\u3092\u30cf\u30a4\u30e9\u30a4\u30c8\u3059\u308b\u95a2\u6570 highlight_logos \u3092\u5b9a\u7fa9\\n\\nfrom PIL import Image, ImageDraw\\n\\ndef highlight_logos(image_file, objects):\\n image = Image.open(image_file)\\n draw = ImageDraw.Draw(image, \\\"RGBA\\\")\\n \\n for obj in objects:\\n vertex_lt = tuple(obj['boundingPoly']['vertices'][0].values())\\n vertex_rb = tuple(obj['boundingPoly']['vertices'][2].values())\\n \\n # bounding box\\n draw.rectangle(xy=(vertex_lt, vertex_rb), outline='red')\\n \\n # probability\\n obj['description']\\n draw.text(xy=(vertex_lt[0], vertex_lt[1]-10),\\n text=obj['description'] + ':' + str(format(obj['score'], '.3f')),\\n fill='red') \\n display(image)\\n\\nhighlight_logos('googleplex.jpg', response['responses'][0]['logoAnnotations'])\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\n\u691c\u51fa\u7d50\u679c\u3092\u53ef\u8996\u5316\u3057\u3088\u3046\\n\u3053\u3053\u3067\u3082\u3001\u691c\u51fa\u3057\u305f\u30ed\u30b4\u306e\u77e9\u5f62\u60c5\u5831\u304c\u5f97\u3089\u308c\u308b\u306e\u3067\u305d\u308c\u3092\u63cf\u753b\u3057\u3066\u307f\u307e\u3057\u3087\u3046\u3002\u6b21\u306e\u30bb\u30eb\u3092\u5b9f\u884c\u3059\u308b\u3068\u691c\u77e5\u3057\u305f\u30aa\u30d6\u30b8\u30a7\u30af\u30c8\u3092\u30cf\u30a4\u30e9\u30a4\u30c8\u3059\u308b\u4fbf\u5229\u95a2\u6570 highlight_logos \u304c\u5b9a\u7fa9\u3055\u308c\u307e\u3059\u3002\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"08 - Signal Processing - Scipy.ipynb\\\".\\nThe first task is:\\nThe amplitudes don't seem quite right - longer duration increases the signal to noise and gives a better result:\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# Create signal\\nsr = 2000 # Sample rate\\ndur = 10 # Increased duration (s) (increasing\\/decreasing this changes S\\/N)\\nX = np.linspace(0, dur-1\\/sr, int(dur*sr)) # Time\\nY_s = amp1*np.sin(X*2*np.pi*frq1 - np.pi\\/4) + amp2*np.sin(X*2*np.pi*frq2 + np.pi\\/2)\\nY_sn = Y_s + 40*np.random.rand(len(X))\\n\\n# Determine Single Sided FT Spectrum\\nY_sn_fft = np.fft.fft(Y_sn)\\n\\n# Update ft output\\nFT = np.roll(Y_sn_fft, len(X)\\/\\/2) # Shift zero freq component to center of spectrum\\nSSFT_amp = np.abs(FT)[len(X)\\/\\/2:] # Use the absolute value for amplitude; spectrum is symmetric - start from zero freq component\\nSSFT_amp = 2\\/len(X) * SSFT_amp # Scale by 2 (using half the spectrum) \\/ number points\\n\\n# Determine frequencies\\nfreqs = sr\\/len(X)*np.arange(0,len(SSFT_amp))\\n\\n# Plot\\nplt.plot(freqs[1:], SSFT_amp[1:])\\nplt.title('Single-Sided Spectrum of Signal')\\nplt.xlabel('freq (Hz)')\\nplt.ylabel('Freq Amplitude')\\nplt.show()\\n\\n# Create signal\\nsr = 2000 # Sample rate\\ndur = 10 # Increased duration (s) (increasing\\/decreasing this changes S\\/N)\\nX = np.linspace(0, dur-1\\/sr, int(dur*sr)) # Time\\nY_s = amp1*np.cos(X*2*np.pi*frq1 - np.pi\\/4) + amp2*np.cos(X*2*np.pi*frq2 + np.pi\\/2)\\nY_sn = Y_s + 40*np.random.rand(len(X))\\n\\n# Determine Single Sided FT 
Spectrum\\nY_sn_fft = np.fft.fft(Y_sn)\\n\\n# Update ft output\\nFT = np.roll(Y_sn_fft, len(X)\\/\\/2) # Shift zero freq component to center of spectrum\\nSSFT_amp = np.abs(FT)[len(X)\\/\\/2:] # Use the absolute value for amplitude; spectrum is symmetric - start from zero freq component\\nSSFT_amp = 2\\/len(X) * SSFT_amp # Scale by 2 (using half the spectrum) \\/ number points\\n\\n# Determine frequencies\\nfreqs = sr\\/len(X)*np.arange(0,len(SSFT_amp))\\n\\n# Plot\\nplt.plot(freqs[1:], SSFT_amp[1:])\\nplt.title('Single-Sided Spectrum of Signal')\\nplt.xlabel('freq (Hz)')\\nplt.ylabel('Freq Amplitude')\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Convert the individual characters to sequences\\nsequences = # TODO 3: Your code goes here\\n\\nfor seq in sequences.take(1):\\n print(chars_from_ids(seq))\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nThe batch method lets you easily convert these individual characters to sequences of the desired size.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Note: replace with the IP address of your data server\\npr_min_max(\\\"192.168.99.100\\\")\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nAnalyze the Data\\nLet's run the pr_min_max function on our dataset. Note that the IP address used below may differ from your data server deployment.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Create a dictionary to store the data from the table\\nThe dictionary format is such that the key is the university name, and the value is a list of strings indicating their annual endowment.\\n\",\"targets\":\"# Using `OrderedDict()` to have the legend ordered later on when plotting the results \\nunis = OrderedDict()\\nfor row in table.find_all('tr')[1:]:\\n data = row.text.split('\\\\n')\\n unis[data[1]] = [money.split('[')[0] for money in data[2:-1]]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Load dataset and create array of class names.\\nbuilder = tfds.builder('cifar100')\\nbuilder.download_and_prepare()\\nds_test = builder.as_dataset('test')\\ninfo = builder.info\\nclassnames = [\\n info.features['label'].int2str(id_)\\n for id_ in range(info.features['label'].num_classes)\\n]\\nclassnames[:10]\\n\\n# \\\"best prompts\\\" from CLIP paper (https:\\/\\/arxiv.org\\/abs\\/2103.00020)\\nPROMPTS = [\\n 'itap of a {}.',\\n 'a bad photo of the {}.',\\n 'a origami {}.',\\n 'a photo of the large {}.',\\n 'a {} in a video game.',\\n 'art of the {}.',\\n 'a photo of the small {}.',\\n '{}',\\n]\\n\\ntexts = [\\n prompt.format(classname)\\n for classname in classnames\\n for prompt in PROMPTS\\n]\\nlen(texts)\\n\\n# Tokenize the texts using numpy like before.\\ntokens = tokenizer(texts)\\ntokens.shape\\n\\n_, ztxt, _ = lit_model.apply(lit_variables, tokens=tokens)\\nztxt.shape\\n\\n# `pp` from above (section \\\"Load model\\\") is a TensorFlow graph that can\\n# efficiently be added to the input pre-processing.\\nimgs = 
next(iter(ds_test.map(pp).batch(4)))['image']\\n# Note that `pp` would also tokenize \\\"texts\\\" to \\\"tokens\\\", if such a feature was\\n# present in the dataset (which is not the case for cifar).\\n\\nplt.figure(figsize=(15, 4))\\nplt.imshow(np.hstack(imgs) * .5 + .5)\\nplt.axis('off');\\n\\n# JIT-compile image embedding function because there are lots of images.\\n@jax.jit\\ndef embed_images(variables, images):\\n zimg, _, _ = lit_model.apply(variables, images=images)\\n return zimg\\n\\n# Compute all images embeddings & collect correct labels.\\nzimgs = []\\nlabels = []\\n\\nfor batch in tqdm.tqdm(ds_test.map(lit_model.get_pp()).batch(500)):\\n labels += list(batch['label'].numpy())\\n zimg = embed_images(lit_variables, batch['image'].numpy())\\n zimgs.append(np.array(zimg))\\nzimgs = np.concatenate(zimgs)\\nzimgs.shape\\n\\n# Compute similarities ...\\nsims = zimgs @ ztxt.reshape([len(classnames), len(PROMPTS), -1]).mean(axis=1).T\\nsims.shape\\n\\n# ... and use most similar embedding to predict label.\\n(sims.argmax(axis=1) == np.array(labels)).mean()\\n# Expected accuracy for model \\\"LiT-B16B\\\" : 79.19\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\ntfds zero-shot evaluation\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"2.5.2 Avec $G_6$\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nprint(g6)\\nfor w in [ [], ['a', 'b'], ['a', 'a', 'a', 'b', 'b', 'b'], # True, True, True\\n ['a', 'a', 'a', 'a', 'b', 'b', 'b'], # True\\n ['a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b', 'b'], # True\\n ['a', 'b', 'a'], ['a', 'a', 'a', 'b', 'b', 'b', 'b'], # False, False\\n ['c'], ['a', 'a', 'a', 'c'], # False, False\\n ]:\\n testeMot(g6, w)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Analysis\\/Bidirectional connections as a function of distance.ipynb\\\".\\nThe first task is:\\nWe collect the intersomatic distances between recurently connected inhibitory neurons. We will plot\\nthem against the total number of possible bidirectionally connected neurons.\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndist_found = list()\\nfor i in bidirectional_id:\\n dist_found += read_rec_dist( fname = mydataset.filename(i) )\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Antes de guardar el resultado como shape necesitamos renombrar las columnas de los a\u00f1os porque son enteros y no strings.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndes_estado = des_estado.rename(columns={2006:'2006',2007:'2007',2008:'2008',2009:'2009',2010:'2010',2011:'2011',2012:'2012',2013:'2013',2014:'2014'})\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"
    \\nDealing with the real world: moving stations\\n\",\"targets\":\"new_survey = gpd.GeoDataFrame.from_file('data\\/adjusted.shp')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Create a new column named totalwgt_kg<\\/tt> that contains birth weight in kilograms. Compute its mean. Remember that when you create a new column, you have to use dictionary syntax, not dot notation.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndf['totalwgt_kg'] = 0.45359237 * df.totalwgt_lb\\ndf.totalwgt_kg.mean()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Classification\\/Week 2\\/Assignment 2\\/module-4-linear-classifier-regularization-assignment-blank.ipynb\\\".\\nThe first task is:\\nAre you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)\\nIt has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:\\narrays = np.load('module-4-assignment-numpy-arrays.npz')\\nfeature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']\\nfeature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']\\nBuilding on logistic regression with no L2 penalty assignment\\nLet us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:\\n$$\\nP(y_i = +1 | \\\\mathbf{x}_i,\\\\mathbf{w}) = \\\\frac{1}{1 + \\\\exp(-\\\\mathbf{w}^T h(\\\\mathbf{x}_i))},\\n$$\\nwhere the feature vector $h(\\\\mathbf{x}_i)$ is given by the word counts of important_words in the review $\\\\mathbf{x}_i$. \\nWe will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)\\nCan you write Python code for it?\\n\",\"targets\":\"\\n'''\\nproduces probablistic estimate for P(y_i = +1 | x_i, w).\\nestimate ranges between 0 and 1.\\n'''\\ndef predict_probability(feature_matrix, coefficients):\\n # Take dot product of feature_matrix and coefficients \\n product = feature_matrix.dot(coefficients)\\n \\n # Compute P(y_i = +1 | x_i, w) using the link function\\n predictions = 1 \\/ (1 + np.exp(-product))\\n \\n return predictions\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"5. Run Experiment\\na. 
Define Experiment Function\\n\",\"targets\":\"def generate_experiment_fn(**experiment_args):\\n\\n def _experiment_fn(run_config, hparams):\\n\\n train_input_fn = lambda: csv_input_fn(\\n TRAIN_DATA_FILES_PATTERN,\\n mode = tf.estimator.ModeKeys.TRAIN,\\n num_epochs=hparams.num_epochs,\\n batch_size=hparams.batch_size\\n )\\n\\n eval_input_fn = lambda: csv_input_fn(\\n VALID_DATA_FILES_PATTERN,\\n mode=tf.estimator.ModeKeys.EVAL,\\n num_epochs=1,\\n batch_size=hparams.batch_size\\n )\\n\\n estimator = create_estimator(run_config, hparams)\\n\\n return tf.contrib.learn.Experiment(\\n estimator,\\n train_input_fn=train_input_fn,\\n eval_input_fn=eval_input_fn,\\n eval_steps=None,\\n **experiment_args\\n )\\n\\n return _experiment_fn\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Contents\\nAdd Video\\nIPython.display.YouTubeVideo lets you play Youtube video directly in the notebook. Library support is available to play Vimeo and local videos as well\\n\",\"targets\":\"from IPython.display import YouTubeVideo\\nYouTubeVideo('ooOLl4_H-IE')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"02_Analysis_of_Twitter_Social_Network.ipynb\\\".\\nThe first task is:\\nExercise: do the same for the Graph comprising only retweet, replies, quote and mentions\\nPercolation of the Giant Component\\nCan you write Python code for it?\\n\",\"targets\":\"\\nimport random\\n\\ndef getGCsize(G):\\n \\\"\\\"\\\" returns the size of the largest component of G\\\"\\\"\\\"\\n \\n comps = nx.connected_components(G)\\n return max([len(comp) for comp in comps])\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"2.0\\/tutorials\\/compute.ipynb\\\".\\nThe first task is:\\nWe will now have three new sets of synthetics which can be compared, plotted, or removed.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nb.models\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"And here's the Pmf that represents the sum of three dice.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nthrice = twice + d6\\nthinkplot.Pdf(d6)\\nthinkplot.Pdf(twice, color='green')\\nthinkplot.Pdf(thrice, color='red')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"About $2$ seconds to find correctly the first $5671$ digits? That's slow! 
But hey, it's Python (dynamic typing etc).\\n\\nOther methods\\nTrigonometric methods (hard)\\nSome methods are based on the $\\\\mathrm{arccot}$ or $\\\\arctan$ functions, and use the appropriate Taylor series to approximate these functions.\\nThe best example is Machin's formula:\\n$$\\\\pi = 16 \\\\;\\\\mathrm{arccot}(5) - 4 \\\\;\\\\mathrm{arccot}(239).$$\\nAnd we use the Taylor series to approximate $\\\\mathrm{arccot}(x)$:\\n$$\\\\mathrm{arccot}(x) = \\\\frac{1}{x} - \\\\frac{1}{3x^3} + \\\\frac{1}{5x^5} - \\\\frac{1}{7x^7} + \\\\dots = \\\\sum_{n=0}^{+\\\\infty} \\\\frac{(-1)^n}{(2n+1) x^{2n+1}} .$$\\nThis method is also explained here with some details.\\nIn order to obtain $n$ digits, we will use fixed-point arithmetic to compute $\\\\pi \\\\times 10^n$ as a Python long integer.\\nHigh-precision arccot computation\\nTo calculate $\\\\mathrm{arccot}$ of an argument $x$, we start by dividing the number $1$ (represented by $10^n$, which we provide as the argument unity) by $x$ to obtain the first term.\\nWe then repeatedly divide by $x^2$ and a counter value that runs over $3$, $5$, $7$ etc, to obtain each next term.\\nThe summation is stopped at the first zero term, which in this fixed-point representation corresponds to a real value less than $10^{-n}$:\\npython\\ndef arccot(x, unity):\\n xpower = unity \\/ x\\n sum = xpower\\n n = 3\\n sign = -1\\n while True:\\n xpower = xpower \\/ (x*x)\\n term = xpower \\/ n\\n if term == 0:\\n break # we are done\\n sum += sign * term\\n sign = -sign\\n n += 2\\n return sum\\nAdapting it to use Decimal numbers is easy:\\n\",\"targets\":\"def arccot(x, unity):\\n \\\"\\\"\\\"Compute arccot(x) with a certain level of precision.\\\"\\\"\\\"\\n x = Decimal(x)\\n unity = Decimal(unity)\\n mysum = xpower = unity \\/ x\\n n = 3\\n sign = -1\\n while True:\\n xpower = xpower \\/ (x*x)\\n term = xpower \\/ n\\n if not term:\\n break\\n mysum += sign * term\\n sign = -sign # we alternate the sign\\n n += 2\\n return mysum\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"31.4. Background\\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\\nBackground BL mixing of momentum coefficient, (schema and value in m2\\/s - may by none)\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# PROPERTY ID - DO NOT EDIT ! \\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \\n\\n# PROPERTY VALUE: \\n# Set as follows: DOC.set_value(\\\"value\\\") \\n# TODO - please enter value(s)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"As per the docstrings of PINNModel, we require two additional functions in the given format - \\n\\nCreate a gradient_fn which tells us about how to compute the gradients of the function-\\n```\\n\\n\\ndef gradient_fn(forward_fn, loss_outputs, initial_data):\\n def model_loss(params, target, weights, rng, ...):\\n # write code using the arguments.\\n # ... 
indicates the variable number of positional arguments.\\n return\\n return model_loss\\n```\\n\\n\\n\\n\\n\\nAnd to understand more about PINNModel, you can see that the same gradient_fn gets called in the code for computing the gradients.\\nFor our purpose, we have two variables $(x, t)$ and we need to tell the PINN Model how to compute the final gradient. For carrying out this process we will be using these main features from jax library for calculating the loss - \\n\\nvmap - This for parallelising computations in batches. We will process each row of the dataset, but it will get batched automatically using this feature. \\njacrev - This is used to calculate the jacobian matrix. In our case, the output is a single dimension and hence it can be thought of as the gradient function. We could directly use jax's grad function but using jacrev simplifies the array shapes and hence is easier.\\n\\nWe need to compute two losses for solving our differential equation-\\n\\nInitial Loss\\n\\nu_pred = forward_fn(params, rng, x_b, t_b)\\ninitial_loss = jnp.mean((u_pred - boundary_target) ** 2)\\n\\nRegulariser Loss\\n\\nThis is slightly complicated as we need to compute \\n$f(\\\\theta, x, t):= \\\\ \\\\ \\\\frac{\\\\partial NN(\\\\theta, x, t)}{\\\\partial t} + NN(\\\\theta, x, t)\\\\frac{\\\\partial NN(\\\\theta, x, t)}{\\\\partial x} - (0.01\\/\\\\pi)\\\\frac{\\\\partial^2 NN(\\\\theta, x, t)}{\\\\partial^2 x} $\\nThe partial derivative operation in the first and second terms can be calculated using jacrev function-\\nu_x, u_t = jacrev(forward_fn, argnums=(2, 3))(params, rng, x, t)\\nThe second partial derivative operation in the third term can be applying jacrev twice-\\nu_xx = jacrev(jacrev(forward_fn, argnums=2), argnums=2)(params, rng, x, t)\\n\",\"targets\":\"from jax import jacrev\\nimport functools\\n\\ndef gradient_fn(forward_fn, loss_outputs, initial_data):\\n \\\"\\\"\\\"\\n This function calls the gradient function, to implement the backpropogation\\n \\\"\\\"\\\"\\n boundary_data_x = initial_data['labeled_x']\\n boundary_data_t = initial_data['labeled_t']\\n\\n boundary_target = initial_data['labeled_u']\\n\\n @jax.jit\\n def model_loss(params, target, weights, rng, x_train, t_train):\\n\\n @functools.partial(jax.vmap, in_axes=(None, 0, 0))\\n def small_loss(params, x, t):\\n u = forward_fn(params, rng, x, t)\\n u_x, u_t = jacrev(forward_fn, argnums=(2, 3))(params, rng, x, t)\\n u_xx = jacrev(jacrev(forward_fn, argnums=2), argnums=2)(params, rng, x, t)\\n con = 0.01\\/np.pi\\n return u_t + u * u_x - con * u_xx\\n\\n u_pred = forward_fn(params, rng, boundary_data_x, boundary_data_t)\\n f_pred = small_loss(params, x_train, t_train)\\n loss_u = jnp.mean((u_pred - boundary_target) ** 2)\\n loss_f = jnp.mean((f_pred) ** 2)\\n\\n return loss_u + loss_f\\n\\n return model_loss\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"0.19\\/_downloads\\/2b9ae87368ee06cd9589fd87e1be1d30\\/plot_time_frequency_mixed_norm_inverse.ipynb\\\".\\nThe first task is:\\nCompute MxNE with time-frequency sparse prior\\nThe TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA)\\nthat promotes focal (sparse) sources (such as dipole fitting techniques)\\n[1] [2]. 
The benefit of this approach is that:\\n\\nit is spatio-temporal without assuming stationarity (sources properties\\n can vary over time)\\nactivations are localized in space, time and frequency in one step.\\nwith a built-in filtering process based on a short time Fourier\\n transform (STFT), data does not need to be low passed (just high pass\\n to make the signals zero mean).\\nthe solver solves a convex optimization problem, hence cannot be\\n trapped in local minima.\\n\\nReferences\\n.. [1] A. Gramfort, D. Strohmeier, J. Haueisen, M. Hamalainen, M. Kowalski\\n \\\"Time-Frequency Mixed-Norm Estimates: Sparse M\\/EEG imaging with\\n non-stationary source activations\\\",\\n Neuroimage, Volume 70, pp. 410-422, 15 April 2013.\\n DOI: 10.1016\\/j.neuroimage.2012.12.051\\n.. [2] A. Gramfort, D. Strohmeier, J. Haueisen, M. Hamalainen, M. Kowalski\\n \\\"Functional Brain Imaging with M\\/EEG Using Structured Sparsity in\\n Time-Frequency Dictionaries\\\",\\n Proceedings Information Processing in Medical Imaging\\n Lecture Notes in Computer Science, Volume 6801\\/2011, pp. 600-611, 2011.\\n DOI: 10.1007\\/978-3-642-22092-0_49\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# Author: Alexandre Gramfort \\n# Daniel Strohmeier \\n#\\n# License: BSD (3-clause)\\n\\nimport numpy as np\\n\\nimport mne\\nfrom mne.datasets import sample\\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse\\nfrom mne.inverse_sparse import tf_mixed_norm, make_stc_from_dipoles\\nfrom mne.viz import (plot_sparse_source_estimates,\\n plot_dipole_locations, plot_dipole_amplitudes)\\n\\nprint(__doc__)\\n\\ndata_path = sample.data_path()\\nsubjects_dir = data_path + '\\/subjects'\\nfwd_fname = data_path + '\\/MEG\\/sample\\/sample_audvis-meg-eeg-oct-6-fwd.fif'\\nave_fname = data_path + '\\/MEG\\/sample\\/sample_audvis-no-filter-ave.fif'\\ncov_fname = data_path + '\\/MEG\\/sample\\/sample_audvis-shrunk-cov.fif'\\n\\n# Read noise covariance matrix\\ncov = mne.read_cov(cov_fname)\\n\\n# Handling average file\\ncondition = 'Left visual'\\nevoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))\\nevoked = mne.pick_channels_evoked(evoked)\\n# We make the window slightly larger than what you'll eventually be interested\\n# in ([-0.05, 0.3]) to avoid edge effects.\\nevoked.crop(tmin=-0.1, tmax=0.4)\\n\\n# Handling forward solution\\nforward = mne.read_forward_solution(fwd_fname)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"DXP statistics\\n\",\"targets\":\"from spectrocrunch.math.distributions import plothistogram\\nimport scipy.optimize\\n\\nxiaimage.onlyicrocr(False)\\n\\nstats = xiaimage.stats.astype(float)\\nNslow = stats[...,xiaimage.STEVT,0]\\nRreal = stats[...,xiaimage.STICR,0]\\nRslow = stats[...,xiaimage.STOCR,0]\\nLTfast = stats[...,xiaimage.STLT,0]\\/1000.\\nDTslow = stats[...,xiaimage.STDT,0]\\/100.\\n\\n# Dead time fraction\\nmaskDT100 = Rreal==0\\nwith np.errstate(divide='ignore', invalid='ignore'):\\n RT = Nslow\\/Rslow\\n DT = (Rreal-Rslow)\\/Rreal\\n LTslow = RT*Rslow\\/Rreal\\nDT[maskDT100] = 1\\nRT[maskDT100] = 0.1\\nLTslow[maskDT100] = 0\\nDTfast = 1-LTfast\\/RT\\nNfast = Rreal*LTfast\\nRfast = Nfast\\/RT\\n\\nnp.testing.assert_array_almost_equal(DTslow, DT, decimal=2) \\n\\n# Dead time per event\\nwith np.errstate(divide='ignore', invalid='ignore'):\\n tauslow = -np.log(LTslow\\/RT)\\/Rreal\\n taufast = 
-np.log(LTfast\\/RT)\\/Rreal\\nmask = DT<1\\ntauslow[~mask] = tauslow[mask].max()\\nmask = np.isfinite(taufast)\\ntaufast[~mask] = taufast[mask].max()\\n\\ntmp = plothistogram(*np.histogram(DTfast,bins=100),alpha=0.7,label='DTfast')\\ntmp = plothistogram(*np.histogram(DTslow,bins=100),alpha=0.7,label='DTslow')\\nax = plt.gca()\\ntmp = ax.set_xlabel('Dead time fraction')\\ntmp = ax.set_ylabel('Number of pixels')\\ntmp = ax.legend()\\n#tmp = ax.set_yscale('log')\\n\\nplt.figure()\\ntmp = plothistogram(*np.histogram(LTfast,bins=100),alpha=0.7,label='LTfast')\\ntmp = plothistogram(*np.histogram(LTslow,bins=100),alpha=0.7,label='LTslow')\\nax = plt.gca()\\ntmp = ax.set_xlabel('Time (msec)')\\ntmp = ax.set_ylabel('Number of pixels')\\ntmp = ax.legend()\\n\\nplt.figure()\\nmask = DTslow < 0.2\\ntmp = plothistogram(*np.histogram(tauslow[mask]*1e6,bins=100),alpha=0.7,label='Tau slow (DT<20%)')\\ntmp = plothistogram(*np.histogram(tauslow[~mask]*1e6,bins=100),alpha=0.7,label='Tau slow (DT>=20%)')\\ntmp = plothistogram(*np.histogram(taufast*1e6,bins=100),label='Tau fast')\\nax = plt.gca()\\ntmp = ax.set_xlabel('Tau (usec)')\\ntmp = ax.set_ylabel('Number of pixels')\\ntmp = ax.legend()\\n\\nplt.figure()\\ntmp = plt.plot([Rreal.min(), Rreal.max()], [Rreal.min(), Rreal.max()],...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"06_image-classification\\/dlnd_image_classification.ipynb\\\".\\nThe first task is:\\nConvolution and Max Pooling Layer\\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\\n* Apply a convolution to x_tensor using weight and conv_strides.\\n * We recommend you use same padding, but you're welcome to use any padding.\\n* Add bias\\n* Add a nonlinear activation to the convolution.\\n* Apply Max Pooling using pool_ksize and pool_strides.\\n * We recommend you use same padding, but you're welcome to use any padding.\\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. 
You may still use the shortcut option for all the other layers.\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndef conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\\n \\\"\\\"\\\"\\n Apply convolution then max pooling to x_tensor\\n :param x_tensor: TensorFlow Tensor\\n :param conv_num_outputs: Number of outputs for the convolutional layer\\n :param conv_ksize: kernal size 2-D Tuple for the convolutional layer\\n :param conv_strides: Stride 2-D Tuple for convolution\\n :param pool_ksize: kernal size 2-D Tuple for pool\\n :param pool_strides: Stride 2-D Tuple for pool\\n : return: A tensor that represents convolution and max pooling of x_tensor\\n \\\"\\\"\\\"\\n num_units = (int(x_tensor.shape[2]) \\/ conv_strides[0] \\/ pool_strides[0])**2 * conv_num_outputs\\n deviation = 1\\/np.sqrt(num_units)\\n\\n W = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1],\\n int(x_tensor.shape[3]), conv_num_outputs],\\n mean=0.0, stddev=deviation))\\n b = tf.Variable(tf.truncated_normal([conv_num_outputs],\\n mean=0.0, stddev=deviation))\\n \\n x_conved = tf.nn.conv2d(x_tensor, W, \\n strides=[1, conv_strides[0], conv_strides[0], 1], \\n padding='SAME')\\n x_biased = tf.nn.bias_add(x_conved, b)\\n x_rected = tf.nn.relu(x_biased)\\n return tf.nn.max_pool(x_rected,\\n ksize=[1, pool_ksize[0], pool_ksize[1], 1],\\n strides=[1, pool_strides[0], pool_strides[1], 1],\\n padding='SAME')\\n\\n\\ntests.test_con_pool(conv2d_maxpool)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"2.3 Merge the employee information with the school data set
    <\\/a>\\nNext, merge the employee information into the school data set. First names differ slightly and nicknames are sometimes used (for example, \\\"Rob\\\" vs. \\\"Robert\\\"), so we match on last name and first three letters of first name.\\nTo make this more accurate in the future, consider using a fuzzy matching algorithm. 40 schools (about 30% of the data set) are lost in the matching, either due to missing principal name in the BuildBPS data set (11) or inadequate matching (29).\\n\",\"targets\":\"#Merge on last name\\nschools_df = pd.merge(schools, employees, on = 'Principal_Lastname')\\n\\n#Keep those matches that match in the first 3 letters of first name\\nschools_df['Firstname_match'] = schools_df['Principal_Firstname_x'].apply(lambda x : x[0:3].lower())\\nschools_df['Principal_Firstname_y'] = schools_df['Principal_Firstname_y'].apply(lambda x: x.lower())\\nschools_df['correct_name'] = schools_df.apply(lambda x: x['Firstname_match'] in x['Principal_Firstname_y'], axis = 1)\\nschools_df = schools_df[schools_df['correct_name']]\\n\\nschools_df.describe(include = 'all')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"HW01-Intro_to_Pandas\\/Intro to Pandas.ipynb\\\".\\nThe first task is:\\nHierarchical indexing\\nIn the baseball example, I was forced to combine 3 fields to obtain a unique index that was not simply an integer value. A more elegant way to have done this would be to create a hierarchical index from the three fields.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nbaseball_h = baseball.set_index(['year', 'team', 'player'])\\nbaseball_h.head(10)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# weights = []\\nweights = np.linspace(1,10,len(samples)) ** 2\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nAttach weights to samples\\nOptionally, one can attach weights to the samples. 
This alters the influence of samples on the modelled distribution.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Labs\\/Lab7\\/Lab7.ipynb\\\".\\nThe first task is:\\npandas dataframe columns can be called as python series using the syntax dataframe.columnlabel, as below, which is why it usually makes sense to define a column name\\/label that is short and has no spaces\\nCan you write Python code for it?\\n\",\"targets\":\"\\nasteroids.ra\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"An\u00e1lisis del ejemplo\\nLas ecuaciones\\nnormales y sklearn entregan el mismo resultado:\\n$$\\\\begin{aligned}\\n\\\\theta = (& 0.00, -0.10, 0.12, 0.02, 0.07, -0.22, \\\\\\n & 0.29, 0.00, -0.34, 0.29, -0.23, -0.22, 0.09, -0.41 )\\\\end{aligned}$$\\nMientras que el algoritmo lms entrega\\n$$\\\\begin{aligned}\\n \\\\theta = (&0.00, -0.10, 0.12, 0.02, 0.07, -0.21, \\\\\\n &0.29, 0.00, -0.34, 0.29, -0.23, -0.21, 0.09, -0.41 )\\\\end{aligned}$$\\nSi las variables son\\nCRIM, ZN, INDUS, CHAS, NOX, RM, AGE, DIS, RAD, TAX, PTRATIO, B, LSTAT, MEDV\\n\u00bfCu\u00e1les variables tienen m\u00e1s impacto en el precio de la vivienda?\\nAn\u00e1lisis del ejemplo\\n\\n\\n$\\\\theta_0=+0.00$.\\n\\n\\n$\\\\theta_1 = -0.10$: CRIM, per capita crime rate by town.\\n\\n\\n$\\\\theta_2 = +0.12$: ZN, proportion of residential land zoned for\\n lots over 25,000 sq.ft.\\n\\n\\n$\\\\theta_3 = +0.02$: INDUS, proportion of non-retail business acres\\n per town\\n\\n\\n$\\\\theta_4 = +0.07$: CHAS, Charles River dummy variable (= 1 if tract\\n bounds river; 0 otherwise)\\n\\n\\n$\\\\theta_5 = -0.22$: NOX, nitric oxides concentration (parts per 10\\n million)\\n\\n\\n$\\\\theta_6 = +0.29$: RM, average number of rooms per dwelling\\n\\n\\n$\\\\theta_7 = +0.00$: AGE, proportion of owner-occupied units built\\n prior to 1940\\n\\n\\n$\\\\theta_8 = -0.34$: DIS, weighted distances to five Boston\\n employment centres\\n\\n\\n$\\\\theta_9 = +0.29$: RAD, index of accessibility to radial highways\\n\\n\\n$\\\\theta_{10} = -0.23$: TAX, full-value property-tax rate per\\n \\\\$10,000\\n\\n\\n$\\\\theta_{11} = -0.22$: PTRATIO pupil-teacher ratio by town\\n\\n\\n$\\\\theta_{12} = +0.09$: B, $1000(Bk - 0.63)^2$ where Bk is the\\n proportion of blacks by town\\n\\n\\n$\\\\theta_{13} = -0.41$: LSTAT, % lower status of the population\\n\\n\\nAn\u00e1lisis del ejemplo\\n\\n\\n\u00bfEs posible graficar la soluci\u00f3n?\\n\\n\\n\u00bfC\u00f3mo sabemos si el modelo es bueno?\\n\\n\\n\u00bfCu\u00e1l es el error de entrenamiento? 
\u00bfCu\u00e1l es el error de predicci\u00f3n?\\n\\n\\n\u00bfPodemos utilizar el modelo para realizar predicciones?\\n\\n\\nAplicaci\u00f3n a Iris Dataset\\nRecordemos el Iris Dataset.\\n\",\"targets\":\"%matplotlib inline\\nfrom sklearn import datasets\\nimport matplotlib.pyplot as plt\\niris = datasets.load_iris()\\n\\ndef plot(dataset, ax, i, j):\\n ax.scatter(dataset.data[:,i], dataset.data[:,j], c=dataset.target, s=50)\\n ax.set_xlabel(dataset.feature_names[i], fontsize=20)\\n ax.set_ylabel(dataset.feature_names[j], fontsize=20)\\n\\n# row and column sharing\\nf, ((ax1, ax2), (ax3, ax4), (ax5,ax6)) = plt.subplots(3, 2, figsize=(16,16))\\nplot(iris, ax1, 0, 1)\\nplot(iris, ax2, 0, 2)\\nplot(iris, ax3, 1, 2)\\nplot(iris, ax4, 0, 3)\\nplot(iris, ax5, 1, 3)\\nplot(iris, ax6, 2, 3)\\nf.tight_layout()\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.\\nThis brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more about numpy.\\nMatplotlib\\nMatplotlib is a plotting library. In this section give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.\\n\",\"targets\":\"import matplotlib.pyplot as plt\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"(a) The y-axis is the fraction of identifiers that were incorrectly identified by cual-id. 
This is the rate of false positives\\n\",\"targets\":\"fig = sns.factorplot(x=\\\"#CualIDs With Transcription Errors\\\", \\n y=\\\"Fraction Uncorrectable\\\", \\n hue=\\\"#Transcription Errors\\\",\\n col=\\\"CualID Length\\\", \\n data=df, \\n kind=\\\"bar\\\");\\nfig.savefig('figure_2_false.pdf', dpi=300)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Get the average value of the fluid viscosity\\nmu = sp.mean(water['pore.viscosity'])\\n# Specify a pressure difference (in Pa)\\ndelta_P = 100000\\n# Using the rate method of the StokesFlow algorithm\\nQ = sp.absolute(flow.rate(pores=pn.pores('inlets')))\\n# Because we know the inlets and outlets are at x=0 and x=X\\nLx = sp.amax(pn['pore.coords'][:, 0]) - sp.amin(pn['pore.coords'][:, 0])\\nA = Lx*Lx # Since the network is cubic Lx = Ly = Lz\\nK = Q*mu*Lx\\/(delta_P*A)\\nprint(K)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nThe resulting pressure field can be visualized in Paraview, giving the following:\\n\\nDetermination of Permeability Coefficient\\nThe way to calculate K is the determine each of the values in Darcy's law manually and solve for K, such that $$ K = \\\\frac{Q\\\\mu L} {\\\\Delta P A} $$\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"1 Mathematics and Python\\/Lectures notebooks\\/10 numpy arrays and operations with them\\/matrix_operations-.ipynb\\\".\\nThe first task is:\\n\u0420\u0430\u0441\u0441\u043c\u043e\u0442\u0440\u0438\u043c \u043e\u0434\u043d\u043e \u0438\u043d\u0442\u0435\u0440\u0435\u0441\u043d\u043e\u0435 \u0441\u0432\u043e\u0439\u0441\u0442\u0432\u043e \u043e\u043f\u0440\u0435\u0434\u0435\u043b\u0438\u0442\u0435\u043b\u044f. \u041f\u0443\u0441\u0442\u044c \u0443 \u043d\u0430\u0441 \u0435\u0441\u0442\u044c \u043f\u0430\u0440\u0430\u043b\u043b\u0435\u043b\u043e\u0433\u0440\u0430\u043c\u043c \u0441 \u0443\u0433\u043b\u0430\u043c\u0438 \u0432 \u0442\u043e\u0447\u043a\u0430\u0445 $(0, 0), (c,d), (a+c, b+d), (a, b)$ (\u0443\u0433\u043b\u044b \u0434\u0430\u043d\u044b \u0432 \u043f\u043e\u0440\u044f\u0434\u043a\u0435 \u043e\u0431\u0445\u043e\u0434\u0430 \u043f\u043e \u0447\u0430\u0441\u043e\u0432\u043e\u0439 \u0441\u0442\u0440\u0435\u043b\u043a\u0435). \u0422\u043e\u0433\u0434\u0430 \u043f\u043b\u043e\u0449\u0430\u0434\u044c \u044d\u0442\u043e\u0433\u043e \u043f\u0430\u0440\u0430\u043b\u043b\u0435\u043b\u043e\u0433\u0440\u0430\u043c\u043c\u0430 \u043c\u043e\u0436\u043d\u043e \u0432\u044b\u0447\u0438\u0441\u043b\u0438\u0442\u044c \u043a\u0430\u043a \u043c\u043e\u0434\u0443\u043b\u044c \u043e\u043f\u0440\u0435\u0434\u0435\u043b\u0438\u0442\u0435\u043b\u044f \u043c\u0430\u0442\u0440\u0438\u0446\u044b $\\\\left( \\\\begin{array}{cc} a & c \\\\ b & d \\\\end{array} \\\\right)$. \u041f\u043e\u0445\u043e\u0436\u0438\u043c \u043e\u0431\u0440\u0430\u0437\u043e\u043c \u043c\u043e\u0436\u043d\u043e \u0432\u044b\u0440\u0430\u0437\u0438\u0442\u044c \u0438 \u043e\u0431\u044a\u0435\u043c \u043f\u0430\u0440\u0430\u043b\u043b\u0435\u043b\u0435\u043f\u0438\u043f\u0435\u0434\u0430 \u0447\u0435\u0440\u0435\u0437 \u043e\u043f\u0440\u0435\u0434\u0435\u043b\u0438\u0442\u0435\u043b\u044c \u043c\u0430\u0442\u0440\u0438\u0446\u044b \u0440\u0430\u0437\u043c\u0435\u0440\u0430 $3 \\\\times 3$.\\n7. 
\u0420\u0430\u043d\u0433 \u043c\u0430\u0442\u0440\u0438\u0446\u044b\\n\u041d\u0430\u043f\u043e\u043c\u0438\u043d\u0430\u043d\u0438\u0435 \u0442\u0435\u043e\u0440\u0438\u0438. \u0420\u0430\u043d\u0433\u043e\u043c \u043c\u0430\u0442\u0440\u0438\u0446\u044b $A$ \u043d\u0430\u0437\u044b\u0432\u0430\u0435\u0442\u0441\u044f \u043c\u0430\u043a\u0441\u0438\u043c\u0430\u043b\u044c\u043d\u043e\u0435 \u0447\u0438\u0441\u043b\u043e \u043b\u0438\u043d\u0435\u0439\u043d\u043e \u043d\u0435\u0437\u0430\u0432\u0438\u0441\u0438\u043c\u044b\u0445 \u0441\u0442\u0440\u043e\u043a (\u0441\u0442\u043e\u043b\u0431\u0446\u043e\u0432) \u044d\u0442\u043e\u0439 \u043c\u0430\u0442\u0440\u0438\u0446\u044b.\\n\u0412 NumPy \u0440\u0430\u043d\u0433 \u043c\u0430\u0442\u0440\u0438\u0446\u044b \u0432\u044b\u0447\u0438\u0441\u043b\u044f\u0435\u0442\u0441\u044f \u0441 \u043f\u043e\u043c\u043e\u0449\u044c\u044e \u0444\u0443\u043d\u043a\u0446\u0438\u0438 numpy.linalg.matrix_rank(M, tol=None), \u0433\u0434\u0435 M \u2014 \u043c\u0430\u0442\u0440\u0438\u0446\u0430, tol \u2014 \u043f\u0430\u0440\u0430\u043c\u0435\u0442\u0440, \u043e\u0442\u0432\u0435\u0447\u0430\u044e\u0449\u0438\u0439 \u0437\u0430 \u043d\u0435\u043a\u043e\u0442\u043e\u0440\u0443\u044e \u0442\u043e\u0447\u043d\u043e\u0441\u0442\u044c \u0432\u044b\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u044f. \u0412 \u043f\u0440\u043e\u0441\u0442\u043e\u043c \u0441\u043b\u0443\u0447\u0430\u0435 \u043c\u043e\u0436\u043d\u043e \u0435\u0433\u043e \u043d\u0435 \u0437\u0430\u0434\u0430\u0432\u0430\u0442\u044c, \u0438 \u0444\u0443\u043d\u043a\u0446\u0438\u044f \u0441\u0430\u043c\u0430 \u043e\u043f\u0440\u0435\u0434\u0435\u043b\u0438\u0442 \u043f\u043e\u0434\u0445\u043e\u0434\u044f\u0449\u0435\u0435 \u0437\u043d\u0430\u0447\u0435\u043d\u0438\u0435 \u044d\u0442\u043e\u0433\u043e \u043f\u0430\u0440\u0430\u043c\u0435\u0442\u0440\u0430.\\nCan you write Python code for it?\\n\",\"targets\":\"\\na = np.array([[1, 2, 3], [1, 1, 1], [2, 2, 2]])\\nr = np.linalg.matrix_rank(a)\\n\\nprint \\\"\u041c\u0430\u0442\u0440\u0438\u0446\u0430:\\\\n\\\", a\\nprint \\\"\u0420\u0430\u043d\u0433 \u043c\u0430\u0442\u0440\u0438\u0446\u044b:\\\", r\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Association Rules Mining Filtering\\n\",\"targets\":\"df_books_bigReviews = pd.DataFrame(df_new_03[['asin','filteredText']].groupby(['asin'])['filteredText'].progress_apply(list))\\ndf_books_bigReviews = df_books_bigReviews.reset_index()\\ndf_books_bigReviews = df_books_bigReviews.assign(transactions = df_books_bigReviews['filteredText'].progress_apply(lambda reviews_lis:len(reviews_lis)))\\ndf_books_bigReviews.head()\\n\\nfrom apyori import apriori\\n\\n# Support\\n# Support is an indication of how frequently the itemset appears in the dataset.\\n# Confidence\\n# Confidence is an indication of how often the rule has been found to be true.\\n# Lift\\n# The ratio of the observed support to that expected if X and Y were independent.\\ndef apply_arm(transactions):\\n return list(apriori(transactions, min_support = 1\\/len(transactions), min_confidence = 1, min_lift = len(transactions), max_length = 4))\\n\\nbooks_with_arm = df_books_bigReviews.assign(arm = df_books_bigReviews['filteredText'].progress_apply(lambda list_of_reviews:apply_arm(list_of_reviews)))\\nbooks_with_arm.head()\\n\\ndef get_important_nouns(arms):\\n imp_nns = []\\n if \\\"items\\\" in pd.DataFrame(arms).keys():\\n results = 
list(pd.DataFrame(arms)['items'])\\n for result in results:\\n if len(list(result)) > 4:\\n imp_nns = imp_nns + list(list(result))\\n if(len(imp_nns)==0):\\n for result in results:\\n if len(list(result)) > 3:\\n imp_nns = imp_nns + list(list(result)) \\n return list(set(imp_nns))\\n return list(set(imp_nns))\\n\\nimp_nns_df = books_with_arm.assign(imp_nns = books_with_arm['arm']\\n .progress_apply(lambda arms:get_important_nouns(arms)))\\nimp_nns_df.head()\\n\\nimp_nns_df = imp_nns_df[['asin','imp_nns']]\\nimp_nns_df.head()\\n\\nimp_nns_df.to_pickle(\\\"..\\/data\\/interim\\/005_important_nouns.p\\\")\\n\\nimp_nns_df = imp_nns_df.assign(num_of_imp_nouns = imp_nns_df['imp_nns'].progress_apply(lambda imp_nouns:len(imp_nouns)))\\nimp_nns_df.head()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Algoritmo Baseline\\n\",\"targets\":\"df = pd.read_csv(\\\"insurance.csv\\\", header=None, names=['n_rei', 'pgtTot'])\\n\\ndm = DummyRegressor()\\nparam_grid = {\\\"strategy\\\": [\\\"mean\\\", \\\"median\\\"]}\\nss = ShuffleSplit(n_splits=1, test_size=.4, random_state=100)\\n\\ncv = GridSearchCV(dm, cv=ss, param_grid=param_grid, scoring=\\\"neg_mean_squared_error\\\")\\n\\ncv.fit(df[['n_rei']], df['pgtTot'])\\n\\ncv.best_score_ * -1\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"20-10-14-ml-workcamp\\/wc-arbeiten-tf-11-aufgabe.ipynb\\\".\\nThe first task is:\\n

    Auslesen der Werte f\u00fcr Layer c<\\/h2>\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# Einlesen der Gewichte und Bias Werte aus Layer c - hat den Index 2 und\\n# wurde in die Variable input3 ausgelesen\\nweights2, biases2 = input3.get_weights()\\n\\n# Ausgabe der Gewichte in Layer c\\nweights2\\n\\n# Aus gabe der Bias Werte in Layer b\\nbiases2\\n\\nprint(\\\"Das sind die Parameter in Layer a: {}\\\".format(input1.get_weights()))\\n\\n\\nprint(\\\"Das sind die Parameter in Layer b: {}\\\".format(input2.get_weights()))\\n\\nprint(\\\"Das sind die Parameter in Layer c: {}\\\".format(input3.get_weights()))\\n\\n#\\n# Versuchen Sie einmal in unserer Aufgabe 10 die\\n# Werte des Layers im Modell nn zu ermitteln !\\n#\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"04-notebook.ipynb\\\".\\nThe first task is:\\nStellen Sie au\u00dferdem sicher, dass folgende S\u00e4tze KEINE Analyse liefern!\\nCan you write Python code for it?\\n\",\"targets\":\"\\nnegative_examples = [\\n \\\"Mann gibt Frau Buch\\\",\\n \\\"Mann schl\u00e4ft\\\"\\n]\\n\\ngrammar4 = \\\"\\\"\\\"\\n S -> NP VP\\n NP -> DET N\\n DET -> \\\"der\\\" | \\\"die\\\" | \\\"das\\\"\\n N -> \\\"Mann\\\" | \\\"Frau\\\" | \\\"Buch\\\"\\n VP -> V NP NP\\n V -> \\\"gibt\\\" | \\\"schenkt\\\"\\n\\\"\\\"\\\"\\n\\ntest_grammar(grammar4, test_sentences)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Comb Test\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nBitWidth=16\\nL=2**(BitWidth)\\nmaxminV =2*L \\nPeeker.clear()\\n\\nx_in=Signal(intbv(0, min=-L, max=L)); Peeker(x_in, 'x_in')\\n\\ny_out=Signal(intbv(0, min=-maxminV, max=maxminV)); Peeker(y_out, 'y_out')\\nCombTracker=[]\\n\\nDataValIn=Signal(bool(0)); Peeker(DataValIn, 'DataValIn')\\nDataValOut=Signal(bool(0)); Peeker(DataValOut, 'DataValOut')\\n\\nclk=Signal(bool(0)); Peeker(clk, 'clk')\\nrst=Signal(bool(0)); Peeker(rst, 'rst')\\n\\n\\nDUT=CombSection(x_in=x_in, y_out=y_out, \\n DataValIn=DataValIn, DataValOut=DataValOut, \\n clk=clk, rst=rst, \\n MthOrderParm=1)\\n\\n\\n\\ndef CombSection_TB():\\n TestValueGen=TestValues['GenValue'].iteritems()\\n \\n @always(delay(1)) \\n def clkgen():\\n clk.next = not clk\\n \\n @instance\\n def stimules():\\n #test the DataValIn responce\\n for step, val in TestValueGen:\\n x_in.next=int(val)\\n if step<12:\\n DataValIn.next=False\\n \\n elif step==12:\\n DataValIn.next=True\\n \\n elif step==100:\\n rst.next=True\\n DataValIn.next=False\\n \\n elif step==101:\\n rst.next=False\\n DataValIn.next=False\\n \\n elif step==112:\\n DataValIn.next=False\\n \\n else:\\n DataValIn.next=True\\n CombTracker.append(int(y_out))\\n \\n yield clk.negedge\\n \\n raise StopSimulation\\n \\n return instances()\\n \\n\\nN=TestValues.shape[0]\\nsim = Simulation(DUT, CombSection_TB(), *Peeker.instances()).run()\\nPeeker.to_wavedrom(start_time=40, stop_time=60, tock=True)\\n\\nplt.stem(TestValues['Time[s]'], TestValues['GenValue']-np.array(CombTracker))\\nplt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))\\n\\n\\nBitWidth=16\\nL=2**(BitWidth)\\nmaxminV =2*L \\nPeeker.clear()\\n\\nx_in=Signal(intbv(0, min=-L, max=L)); Peeker(x_in, 'x_in')\\n\\ny_out=Signal(intbv(0, min=-maxminV, max=maxminV)); 
Peeker(y_out,...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"

    Regions<\\/h2>\\n\",\"targets\":\"regions_chart = c3py.Chart('regions_chart')\\n\\nregions_chart.data.line(x=[1, 2, 3, 4, 5], y=[10, 40, 20, 30, 50], label='line')\\n\\nregions_chart.regions.add(name='x_region', axis='x', start=2, end=4, color='red')\\nregions_chart.regions.add(name='y_region', axis='y', start=40, end=50, color='green')\\n\\nHTML(regions_chart.get_html_string())\\n\\n%%javascript\\n\\n$.each($(\\\":header\\\"), function(index, value) {\\n value.id = value.innerText.replace(\\/\\\\s+\\/g, '').toLowerCase();\\n var heading_level = parseInt(value.localName.charAt(1));\\n $(\\\"#table_of_contents\\\").append(Array((heading_level - 1) * 6).join(\\\" \\\") + \\\"\\\" + value.innerText + \\\"<\\/a>
    \\\");\\n});\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Controlling dreams\\nThe image detail generation method described above tends to produce some patterns more often the others. One easy way to improve the generated image diversity is to tweak the optimization objective. Here we show just one of many ways to do that. Let's use one more input image. We'd call it a \\\"guide\\\".\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nguide = np.float32(PIL.Image.open('flowers.jpg'))\\nshowarray(guide)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"API k testov\u00e1n\u00ed\\nAPI k testov\u00e1n\u00ed vytvo\u0159il Glutexo a obsahuje pro n\u00e1s zaj\u00edmav\u00e9 informace nejen o \u010desk\u00fdch prezidentech. Zdrojov\u00e9 k\u00f3dy v jazyce Ruby jsou i s kompletn\u00edm popisem API dostupn\u00e9 na GitHubu a API samotn\u00e9 pak na adrese http:\\/\\/pyladies-json.herokuapp.com\\/.\\n\",\"targets\":\"data = requests.get('http:\\/\\/pyladies-json.herokuapp.com\\/prezidenti\\/v\u0161ichni')\\ndata.raise_for_status()\\nprint(data.text)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"LogGabor_fit_example.ipynb\\\".\\nThe first task is:\\nTo install the dependencies related to running this notebook, see Installing notebook dependencies.\\nBack to top\\nCan you write Python code for it?\\n\",\"targets\":\"\\nimport os\\nimport numpy as np\\nnp.set_printoptions(formatter={'float': '{: 0.3f}'.format})\\n%matplotlib inline\\nimport matplotlib.pyplot as plt\\nfig_width = 12\\nfigsize=(fig_width, .618*fig_width)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%%bash\\nrm -r tpu_models\\/hub\\nmkdir tpu_models\\/hub\\ntar xvzf 4?tf-hub-format=compressed -C tpu_models\\/hub\\/\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nThis model is still compressed, so lets uncompress it with the tar command below and place it in our tpu_models directory.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"DataScienceWorkshop.ipynb\\\".\\nThe first task is:\\nAdd Another Feature\\nLet's add a new feature MaritalStatus to our model. We run into a problem in that Marital Status is a feature of different strings. We need to find a way to 'encode' this value into a numeric value.\\nThe method we will use here is called 'dummy encoding' or 'one-hot' encoding. We will create a new feature for each string in the old feature and then give that row a 1 for the new feature if it takes that value (e.g. we would create a 'Divorced' feature and then place a 1 in each row that had an MaritalStatus value 'Divorced'). 
Bonus: there are some 'fancier' ways that one could use to encode categorical variables for instance in this blog post.\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# We could also do this directly in sklearn:\\n#https:\\/\\/scikit-learn.org\\/stable\\/modules\\/generated\\/sklearn.preprocessing.OneHotEncoder.html\\nmarital_status_dummies = pd.get_dummies(df['MaritalStatus'])\\n\\nmarital_status_dummies.head()\\n\\n#\\n\\nX = pd.get_dummies(df[use_columns])\\n\\n#\\n\\n# now that we have new columns we need to redo our train\\/test split\\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=3523523)\\n\\nmodel.fit(X_train, y_train)\\n\\ny_pred = model.predict_proba(X_test)[:, 1]\\nmake_auc_plot(y_test, y_pred)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"4_Visibility_Space\\/4_5_1_UV_Coverage_UV_Tracks.ipynb\\\".\\nThe first task is:\\nFigure 4.4.2: $uvw$ track derived from the simulation and projection in the $uv$-plane.\\nThe track in the $uvw$ space are curves and the projection in the $uv$ plane are arcs. Let's focus more on the projection of the track in this plane. To get an observation-independent knowledge of the track, we can try to combine the three equations of $u$, $v$ and $w$ with the objective of eliminating $H$ from the equation. We end up an equation linking $u$, $v$, $X$ and $Y$ (the full derivation can be found in $\\\\S$ A.3 ➞):\\n$$\\\\boxed{u^2 + \\\\left[ \\\\frac{v -\\\\frac{Z}{\\\\lambda} \\\\cos \\\\delta}{\\\\sin \\\\delta} \\\\right]^2 = \\\\left[ \\\\frac{X}{\\\\lambda} \\\\right]^2 + \\\\left[ \\\\frac{Y}{\\\\lambda} \\\\right]^2}$$\\nOne can note that in this particular case, the $uv$ tracks is described by the equation of an ellipsis.\\nThis ellipsis is centered at $(0,\\\\frac{Z}{\\\\lambda} \\\\cos \\\\delta)$ in the ($u$,$v$) plane.\\nThe major axis of $a=\\\\frac{\\\\sqrt{X^2 + Y^2}}{\\\\lambda}$.\\nThe minor axis along the axis $v$ will be a function of $Z$, $\\\\delta$ and $a$.\\nTo verify this fact, let's plot the theoretical ellipsis over the observed portion of the track. 
(You can fall back to the duration of the observation to see that the track is mapping exactly the ellipsis).\\nCan you write Python code for it?\\n\",\"targets\":\"\\n%matplotlib inline\\nfrom matplotlib.patches import Ellipse\\n\\n# parameters of the UVtrack as an ellipse\\na=np.sqrt(X**2+Y**2)\\/lam\\/1e3 # major axis \\nb=a*np.sin(dec) # minor axis\\nv0=Z\\/lam*np.cos(dec)\\/1e3 # center of ellipse\\n\\nplotBL.UVellipse(u,v,w,a,b,v0)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Bigram tagging with HMMs.ipynb\\\".\\nThe first task is:\\nLet's try with some toy data\\nExample taken from Borodovsky & Ekisheva (2006), pp 80-81\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfrom bigram_tagging import testing_viterbi\\ntesting_viterbi()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Create a window and a circle\\nwin = visual.Window() \\ncircle = visual.Circle(win, radius=0.1)\\n\\n# Show the circle until keypress\\ncircle.draw()\\nwin.flip()\\nevent.waitKeys()\\n\\n# Close the window\\nwin.close()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nExample 2\\nA slightly more advanced example, where\\n - A window is opened\\n - A circle is shown until the participant presses any key\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Data Analysis\\nThe first thing we do is read the data into a panda dataframe\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n#Importing Library\\nimport pandas as pd\\n\\n#Reading the csv\\ndata = pd.read_csv('PoliticiansArticleQualityWithCountryPopulation.csv', encoding='utf8')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Create Dataframe\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# Create empty dataframe\\ndf = pd.DataFrame()\\n\\n# Create a column\\ndf['name'] = ['John', 'Steve', 'Sarah']\\ndf['gender'] = ['Male', 'Male', 'Female']\\ndf['age'] = [31, 32, 19]\\n\\n# View dataframe\\ndf\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Again, Kenneth Lay stands out with a total stock value of over $49mil. 
This is a real observation, so I decide to keep it for now.\\n\",\"targets\":\"sp = data_df.plot.scatter(x = 'salary', y = 'total_stock_value', c = 'poi', edgecolors = 'Blue', \\n s = 50)\\nsp.set_yscale('log')\\nsp.set_ylim(1.0e4, 1.5e8)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Chapters\\/Three\\/Feature Extraction.ipynb\\\".\\nThe first task is:\\nSpace-efficient feature vectorizing with the hashing trick\\nIn this chapter's previous examples, a dictionary containing all of the corpus's unique tokens is used to map a document's tokens to the elements of a feature vector. Creating this dictionary has two drawbacks. First, two passes are required over the corpus: the first pass is used to create the dictionary and the second pass is used to create feature vectors for the documents. Second, the dictionary must be stored in memory, which could be prohibitive for large corpora. It is possible to avoid creating this dictionary through applying a hash function to the token to determine its index in the feature vector directly. This shortcut is called the hashing trick. The following example uses HashingVectorizer to demonstrate the hashing trick:\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfrom sklearn.feature_extraction.text import HashingVectorizer\\ncorpus = ['the', 'ate', 'bacon', 'cat']\\n\\nvectorizer = HashingVectorizer(n_features=6)\\nprint(vectorizer.transform(corpus).todense())\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"03-ahorros.ipynb\\\".\\nThe first task is:\\nEjercicio.-- Usando Microsoft Excel u otra herramienta solucione el siguiente problema: Se abre una cuenta de ahorro que paga el 24% nominal, y durante 12 meses se hacen dep\u00f3sitos de $ 100 al principio del mes. A partir del mes 12 (final del mes) realiza retiros mensuales de $ 60 hasta que no quede dinero en la cuenta. \u00bfCu\u00e1nto es la suma aritm\u00e9tica de los intereses recibidos?\\nsavings(deposits, nrate, initbal=0)\\nPermite modelar cuentas de ahorro.\\n\\ndeposits-- flujo de efectivo que representa los dep\u00f3sitos\\/retiros.\\nnrate -- tasa nominal de inter\u00e9s.\\ninitbal -- balance inicial de la cuenta.\\n\\nEjemplo.-- Se abre una cuenta de ahorro que paga el 24% nominal, y durante 12 meses se hacen dep\u00f3sitos de $ 100 al principio del mes. 
\u00bfCu\u00e1nto es la suma aritm\u00e9tica de los intereses recibidos?\\nCan you write Python code for it?\\n\",\"targets\":\"\\ncflo = cf.cashflow(const_value=[100] * 11 + [0], start='2016-1', freq='M')\\ncflo\\n\\nnrate = cf.interest_rate([24] * 12, start='2016-1', freq='M')\\nnrate\\n\\ncf.savings(deposits = cflo, # dep\u00f3sito periodico \\n initbal = 100, # balance inicial\\n nrate = nrate) #\u00a0tasa de inter\u00e9s nominal\\n\\nx = cf.savings(deposits = cflo, # dep\u00f3sito per\u00edodico \\n initbal = 100, # balance inicial\\n nrate = nrate) #\u00a0tasa de inter\u00e9s nominal \\n\\nx.Earned_Interest\\n\\n## intereses como lista \\nx.Earned_Interest.tolist()\\n\\n## suma aritm\u00e9tica de los intereses\\nsum(x.Earned_Interest)\\n\\n# balance final\\nx.Ending_Balance[-1]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"As a check, adding up all the one-hot one's, we should get a total of 10000:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n980 + \\\\\\n1135+ \\\\\\n1032+ \\\\\\n1010+ \\\\\\n982 + \\\\\\n892 + \\\\\\n958 + \\\\\\n1028+ \\\\\\n974 + \\\\\\n1009\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Check below an interactive version of the 3D plot. May help when there are complicated paths between points. You need to install plotly and colorlover (with pip) if you don't already have them.\\n\",\"targets\":\"import plotly.graph_objs as go\\nimport colorlover as cl\\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\\ninit_notebook_mode(connected=True)\\n\\ncb = cl.to_rgb(tuple(map(tuple, c*255)))\\ntrace = go.Scatter3d(\\n name='TSP Sover',\\n x = c[:,0], y = c[:,1], z = c[:,2],\\n marker = dict(\\n size=4.,\\n color=cb\\n ),\\n line=dict(\\n color='#000',\\n width=1,\\n ),\\n )\\ndata = [trace]\\n\\n# Set the different layout properties of the figure:\\nlayout = go.Layout(\\n autosize=False,\\n width=600,\\n height=600,\\n margin = dict(\\n t=0,b=0,l=0,r=0\\n ),\\n scene = go.Scene(\\n xaxis=dict(\\n gridcolor='rgb(255, 255, 255)',\\n zerolinecolor='rgb(255, 255, 255)',\\n showbackground=True,\\n backgroundcolor='rgb(230, 230,230)'\\n ),\\n yaxis=dict(\\n gridcolor='rgb(255, 255, 255)',\\n zerolinecolor='rgb(255, 255, 255)',\\n showbackground=True,\\n backgroundcolor='rgb(230, 230,230)'\\n ),\\n zaxis=dict(\\n gridcolor='rgb(255, 255, 255)',\\n zerolinecolor='rgb(255, 255, 255)',\\n showbackground=True,\\n backgroundcolor='rgb(230, 230,230)'\\n ),\\n aspectmode='cube',\\n camera=dict( \\n eye=dict(\\n x=1.7,\\n y=-1.7,\\n z=1,\\n )\\n ),\\n )\\n)\\n\\nfig = go.Figure(data=data, layout=layout)\\niplot(fig, show_link=False)\\n\\nfrom scipy.spatial import cKDTree\\nkdtree = cKDTree(c)\\n\\ndx, ix = kdtree.query(im_)\\n\\nplt.imshow(ix.reshape((h, w)), cmap='viridis')\\nplt.colorbar()\\nplt.show()\\n\\nplt.imshow(dx.reshape((h, w)))\\nplt.colorbar()\\nplt.show()\\n\\nfig = plt.figure(figsize=(18, 5))\\n\\nax0 = fig.add_subplot(131)\\nplt.imshow(im, interpolation='none')\\n\\nax1 = fig.add_subplot(132, projection='3d')\\nax1.scatter(*c.T, c=c, lw=0, s=40, alpha=1)\\nax1.plot(*c.T, color='k', alpha=0.5)\\nax1.text(*c[0], ' start')\\nax1.text(*c[-1], ' end')\\n\\nax2 
=...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Below are some examples of foods within the 6 groups made by the K-Means algorithm. We present to you, for fun, a small list of foods in each category, revealing some interesting wonders, as well as potentially some flaw in our design choices.\\n\",\"targets\":\"def Examples(df, label_int, si = [0,5]):\\n \\\"\\\"\\\" Creates a 3D log log plot on the requested subplot.\\n Arguments: \\n pred_cat = predicted dataframe for a category\\n actual_cat = dataframe of the real category\\n ax = plt axis instance\\n actual_label = string with label for the actual category\\n colors = list with two entries of strings for color names\\n \\\"\\\"\\\"\\n TP = df[(df.Label == label_int)&(df.correct_guess==True)]\\n FP = df[(df.guess == label_int)&(df.correct_guess==False)]\\n print(\\\"Guessed Similar:\\\")\\n print(TP[\\\"Food Name\\\"][si[0]:si[1]])\\n print(\\\"\\\\nSurprising:\\\")\\n print(FP[\\\"Food Name\\\"][si[0]:si[1]])\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\n\\n
    \\n \\n \\\"Colab Run in Colab\\n <\\/a>\\n <\\/td>\\n \\n \\n \\\"GitHub\\n View on GitHub\\n <\\/a>\\n <\\/td>\\n \\n \\n \\\"Vertex\\n Open in Vertex AI Workbench\\n <\\/a>\\n <\\/td> \\n<\\/table>\\n\\nOverview\\nThis tutorial walks through building a custom container to serve a scikit-learn model on Vertex AI Prediction. You will use the FastAPI Python web server framework to create a prediction and health endpoint.\\nYou will also enable explanations for the endpoint\\nDataset\\nThis tutorial uses R.A. Fisher's Iris dataset, a small dataset that is popular for trying out machine learning techniques. Each instance has four numerical features, which are different measurements of a flower, and a target label that\\nmarks it as one of three types of iris: Iris setosa, Iris versicolour, or Iris virginica.\\nThis tutorial uses the copy of the Iris dataset included in the\\nscikit-learn library.\\nObjective\\nThe goal is to:\\n- Train a model that uses a flower's measurements as input to predict what type of iris it is.\\n- Save the model and its serialized...\\n\",\"targets\":\"%%writefile requirements.txt\\njoblib~=1.0\\nnumpy~=1.20\\nscikit-learn~=0.24\\ngoogle-cloud-storage>=1.26.0,<2.0.0dev\\n\\n# Required in Docker serving container\\n%pip install -U --user -r requirements.txt\\n\\n# For local FastAPI development and running\\n%pip install -U --user \\\"uvicorn[standard]>=0.12.0,<0.14.0\\\" fastapi~=0.63\\n\\n# Vertex SDK for Python\\n%pip install -U --user google-cloud-aiplatform\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# print the earliest year in the data\\nprint(data.Year.min())\\n\\n# print the latest year in the data\\nprint(data.Year.max())\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nPlotting a histogram\\nLets learn how to plot histograms. We will plot the number of prizes awarded per year. Nobel prizes can be awarded for up to three people per category. As each winner is recorded as an individual entry the histogram will tell us if there has been a trend of increasing or decreasing multiple prize winners in one year.\\nHowever before we plot the histogram we should find information out about the data so that we can check the range of the data we want to plot.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"2.2 Crime Data\\n2.2.1 Import Crime Data from NYPD:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nurl = 'http:\\/\\/data.beta.nyc\\/\\/dataset\\/c874f72e-23a6-404f-ad83-fdd2aaa104bb\\/resource\\/3377cc50-044b-4806-b062-7b0c14351457\\/download\\/vol2351.csv'\\nCrime = pd.read_csv(url, sep=',', thousands=',') # tab (\\\\t) separated values \\n#print('Variable dtypes:\\\\n', Crime.dtypes, sep=',')\\nCrime.head()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"2_IPython.ipynb\\\".\\nThe first task is:\\nWe create an HTML table dynamically with Python, and we display it in the (HTML-based) notebook.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nHTML('''\\n\\n''' + \\n''.join(['' + \\n ''.join(['
    {row},{col}<\\/td>'.format(\\n row=row, col=col\\n ) for col in range(5)]) +\\n '<\\/tr>' for row in range(5)]) +\\n'''\\n<\\/table>\\n''')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Compute the annual sum for each lat\\/lng pair\\nWe can sum all of the monthly runoff values using NumPy, specifying the axis we want to sum\\n\",\"targets\":\"# Sum the values in axis '0' (time), which reduces the dimension of our array\\nannSum = param_vals[:,:,:].sum(axis=0)\\nannSum.shape\\n\\n#Now that it's two dimensions, we can convert it into a dataFrame\\ndfParam = pd.DataFrame(annSum)\\ndfParam.shape\\n\\n#We can again retrieve a runoff value for a specific lat-lng pair\\n# This is the sum of the value computed above...\\ndfParam.iloc[100,120]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"01-IPythonHintsAndTips.ipynb\\\".\\nThe first task is:\\nThe%run magic runs a scipt along the stated path and prints the results to the cell output. Useful to run external scripts not coded in the notebook itself. Just be sure to copy the script with the notebook. The next cell writes a script to the current directory and then the following cell executes it.\\nCan you write Python code for it?\\n\",\"targets\":\"\\n%%file helloipython.py\\nprint('Hello IPython!')\\n\\n%run helloipython.py\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Building a dynamic model\\nIn the previous notebook, mnist_linear.ipynb<\\/a>, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.\\nThe boilerplate structure for this module has already been set up in the folder mnist_models. The module lives in the sub-folder, trainer, and is designated as a python package with the empty __init__.py (mnist_models\\/trainer\\/__init__.py) file. It still needs the model and a trainer to run it, so let's make them.\\nLet's start with the trainer file first. This file parses command line arguments to feed into the model.\\n\",\"targets\":\"%%writefile mnist_models\\/trainer\\/task.py\\nimport argparse\\nimport json\\nimport os\\nimport sys\\n\\nfrom . 
import model\\n\\n\\ndef _parse_arguments(argv):\\n \\\"\\\"\\\"Parses command-line arguments.\\\"\\\"\\\"\\n parser = argparse.ArgumentParser()\\n parser.add_argument(\\n '--model_type',\\n help='Which model type to use',\\n type=str, default='linear')\\n parser.add_argument(\\n '--epochs',\\n help='The number of epochs to train',\\n type=int, default=10)\\n parser.add_argument(\\n '--steps_per_epoch',\\n help='The number of steps per epoch to train',\\n type=int, default=100)\\n parser.add_argument(\\n '--job-dir',\\n help='Directory where to save the given model',\\n type=str, default='mnist_models\\/')\\n return parser.parse_known_args(argv)\\n\\n\\ndef main():\\n \\\"\\\"\\\"Parses command line arguments and kicks off model training.\\\"\\\"\\\"\\n args = _parse_arguments(sys.argv[1:])[0]\\n\\n # Configure path for hyperparameter tuning.\\n trial_id = json.loads(\\n os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')\\n output_path = args.job_dir if not trial_id else args.job_dir + '\\/'\\n\\n model_layers = model.get_layers(args.model_type)\\n image_model = model.build_model(model_layers, args.job_dir)\\n model_history = model.train_and_evaluate(\\n image_model, args.epochs, args.steps_per_epoch, args.job_dir)\\n\\n\\nif __name__ == '__main__':\\n main()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"ICO Data Object Example.ipynb\\\".\\nThe first task is:\\nThe data object can be accessed like a dictionary to the underlying Dataframes. These will be transformed on their first access into a normalized form. (This might take awhile for the first access)\\nCan you write Python code for it?\\n\",\"targets\":\"\\nstart = time.time()\\ndata[\\\"all_encounter_data\\\"]\\nprint(time.time() - start)\\n\\ndata[\\\"all_encounter_data\\\"].describe(include='all')\\n\\ndata[\\\"all_encounter_data\\\"].columns.values\\n\\ndata['all_encounter_data'].shape[0]\\n\\ndata['all_encounter_data'].to_pickle('all_encounter_data_Dan_20170415.pickle')\\n\\nstart = time.time()\\ndata[\\\"all_person_data\\\"]\\nprint(time.time() - start)\\n\\ndata[\\\"all_person_data\\\"].describe(include='all')\\n\\ndata[\\\"all_person_data\\\"].columns.values\\n\\ndata['all_person_data'].shape[0]\\n\\ndata['all_person_data'].to_pickle('all_person_data_Dan_20170415.pickle')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"NLTK_Basics.ipynb\\\".\\nThe first task is:\\nuse the method nltk.ConditionalFreqDist() to calculate the conditional freq. 
distribution:\\nhere's a built-in corpus which records the inaugural speeches of presidents of the U.S.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfrom nltk.corpus import inaugural\\n\\nprint(len(inaugural.fileids()))\\nprint([fileid for fileid in inaugural.fileids()])\\n\\nimport nltk\\ncfd=nltk.ConditionalFreqDist((target,fileid[:4]) for fileid in inaugural.fileids() for w in inaugural.words(fileid) for target in ['america','citizen'] if w.lower().startswith(target))\\ncfd.plot()\\ncfd.tabulate()\\n\\n[(w.lower()) for fileid in inaugural.fileids() for w in inaugural.words(fileid) for target in ['america','citizen'] if w.lower().startswith(target)][:20]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Lab04\\/ML_lab4_ctr_student.ipynb\\\".\\nThe first task is:\\n(4f) Validation log loss \\nNext, following the same logic as in Parts (4c) and 4(e), compute the validation log loss for both the baseline and logistic regression models. Notably, the baseline model for the validation data should still be based on the label fraction from the training dataset.\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# TODO: Replace with appropriate code\\nlogLossValBase = OHEValidationData.map(lambda x : computeLogLoss(classOneFracTrain, x.label)).sum() \\/ OHEValidationData.count()\\n\\nlogLossValLR0 = evaluateResults(model0, OHEValidationData)\\nprint ('OHE Features Validation Logloss:\\\\n\\\\tBaseline = {0:.3f}\\\\n\\\\tLogReg = {1:.3f}'\\n .format(logLossValBase, logLossValLR0))\\n\\n# TEST Validation log loss (4f)\\nTest.assertTrue(np.allclose(logLossValBase, 0.527603), 'incorrect value for logLossValBase')\\nTest.assertTrue(np.allclose(logLossValLR0, 0.456957), 'incorrect value for logLossValLR0')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# First we load the ascii submodule:\\nfrom astropy.io import ascii\\n\\nexample_csv = ascii.read('http:\\/\\/samplecsvs.s3.amazonaws.com\\/Sacramentorealestatetransactions.csv')\\nprint(example_csv)\\n\\n# We can also read Astronomy-specific formats.\\n# For example, IPAC formatted files\\nexample_ipac = ascii.read('http:\\/\\/exoplanetarchive.ipac.caltech.edu\\/docs\\/tblexamples\\/IPAC_ASCII_one_header.tbl')\\nprint(example_ipac)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nAstropy ASCII file reader\\nWhile a number of ASCII file readers exist (including numpy.genfromtxt, numpy.loadtxt, and pandas.read_*), Astropy includes readers text file formats commonly used in Astronomy.\\nThese are read as an Astropy Table object, which are convertable to numpy arrays or pandas DataFrames. These can contain unit information and there is work on-going to incoporate uncertainities.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Clustering_&_Retrieval\\/Week2\\/Assignment2\\/1_nearest-neighbors-lsh-implementation_blank.ipynb\\\".\\nThe first task is:\\nTo visualize these Gaussian random vectors, let's look at an example in low-dimensions. 
Below, we generate 3 random vectors each of dimension 5.\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# Generate 3 random vectors of dimension 5, arranged into a single 5 x 3 matrix.\\nnp.random.seed(0) # set seed=0 for consistent results\\ngenerate_random_vectors(num_vector=3, dim=5)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Create a PV array\\nImport the OrderedPVArray class and create a PV array object using the parameters above\\n\",\"targets\":\"from pvfactors.geometry import OrderedPVArray\\n# Create pv array\\npvarray = OrderedPVArray.fit_from_dict_of_scalars(pvarray_parameters)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"<\\/a>\\nGeometric distributions\\nGeometric distributions are often used to model the number of Bernoulli($p$) (independent, success\\/failure) trials needed to achieve the first success. A geometric distribution with parameter p (in [0,1]) is specified by the probability mass function\\n$$\\np(1-p)^{x-1}, \\\\quad x = 1, 2, 3,\\\\ldots \\n$$\\nIf $X$ is a random variable with a Geometric(p) distribution then\\n$$\\n\\\\begin{align}\\nE(X) & = \\\\frac{1}{p} \\\\\\nVar(X) & = \\\\frac{(1 - p)}{p^2}\\n\\\\end{align}\\n$$\\nExample. Actual and simulated pmf for Geometric(p=0.4).\\n\",\"targets\":\"sims = RV(Geometric(p=0.4)).sim(10000)\\nsims.plot()\\nGeometric(p=0.4).plot()\\n\\nprint('The simulated mean is: {:.2f}'.format(sims.mean()))\\nprint('The true mean is: {:.2f}'.format(Geometric(p=0.4).mean()))\\n\\nprint('The simulated variance is: {:.2f}'.format(sims.var()))\\nprint('The true variance is: {:.2f}'.format(Geometric(p=0.4).var()))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Como v\u00e9is, en la gr\u00e1fica anterior hay varios problemas pero como esta es una MicroEntrada solo nos vamos a centrar en el problema de las etiquetas repetidas en la leyenda.\\n\u00bfC\u00f3mo podr\u00edamos evitar el meter tantas veces una etiqueta repetida?\\nMi problema es que el bucle es o podr\u00eda ser 'infinito' y tengo que inicializar las etiquetas de alguna forma. Si miro en esta respuesta encontrada en Stackoverflow dice que en la documentaci\u00f3n se indica que \\\"If label attribute is empty string or starts with \u201c_\u201d, those artists will be ignored.\\\" pero si busco aqu\u00ed o en el enlace que indican en la respuesta en Stackoverflow no veo esa funcionalidad indicada en ning\u00fan sitio. Eso es porque aparec\u00eda en la versi\u00f3n 1.3.1 pero luego desapareci\u00f3... 
Sin embargo podemos seguir usando esa funcionalidad aunque actualmente no est\u00e9 documentada:\\n\",\"targets\":\"plt.figure(figsize = (12, 6))\\nfor i in range(10):\\n x = np.arange(i * 10, i * 10 + 10)\\n y_var1 = np.random.randint(1, 5, 10)\\n y_var2 = np.random.randint(5, 8, 10)\\n plt.plot(x, y_var1, color = 'k', label = 'variable1' if i == 0 else \\\"_esto_no_se_pintar\u00e1\\\")\\n plt.plot(x, y_var2, color = 'g', label = 'variable2' if i == 0 else \\\"_esto_tampoco\\\")\\n plt.legend()\\n plt.ylim(0, 9)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"
    \\ntimedelta is a duration expressing the difference between two dates.\\n\",\"targets\":\"delta = dt.timedelta(days = 100) # create a timedelta of 100 days\\ndelta\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Almost twice as fast as Numpy!\\nLet's generalize the function to arrays of 2 dimensions\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n@nb.jit('f8[:,:](f8[:,:], f8[:,:], f8[:,:], f8, f8, f8)')\\ndef CobbDouglasVecNB(Y, K, L, A, alpha, beta):\\n for i in range(K.shape[0]):\\n for j in range(K.shape[1]):\\n Y[i,j]=CobbDouglasNB(K[i,j], L[i,j], A, alpha, beta)\\n return Y\\n\\nK=np.random.random((1000,1000))\\nL=np.random.random((1000,1000))\\n%timeit CobbDouglas(K,L,A,alpha,beta)\\n\\nY=np.zeros_like(K)\\n%timeit CobbDouglasVecNB(Y,K,L,A,alpha,beta)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# This is the bed rock, linearily decreasing from 3000m altitude to 1000m, in 200 steps\\nnx = 200\\nbed_h = np.linspace(3400, 1400, nx)\\n# At the begining, there is no glacier so our glacier surface is at the bed altitude\\nsurface_h = bed_h\\n# Let's set the model grid spacing to 100m (needed later)\\nmap_dx = 100\\n\\n# plot this\\nplt.plot(bed_h, color='k', label='Bedrock')\\nplt.plot(surface_h, label='Initial glacier')\\nplt.xlabel('Grid points')\\nplt.ylabel('Altitude (m)')\\nplt.legend(loc='best');\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nBasics\\nSet-up a simple run with a constant linear bed. 
We will first define the bed:\\nGlacier bed\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%%R -i workDir\\n\\ninFile = paste(c(workDir, 'ampFrags.txt'), collapse='\\/')\\n\\ntbl = read.delim(inFile, sep='\\\\t')\\ntbl %>% head(n=3)\\n\\n%%R -w 950 -h 650\\n\\nsome.taxa = tbl$taxon_name %>% unique %>% head(n=20)\\n\\ntbl.f = tbl %>% \\n filter(taxon_name %in% some.taxa)\\n\\nggplot(tbl.f, aes(fragGC, fragLength)) +\\n stat_density2d() +\\n labs(x='Fragment G+C', y='Fragment length (bp)') +\\n facet_wrap(~ taxon_name, ncol=5) +\\n theme_bw() +\\n theme(\\n text=element_text(size=16),\\n axis.title.y=element_text(vjust=1)\\n )\\n\\n# re-running simulation with pickled file\\n\\n!cd $workDir; \\\\\\n SIPSim fragments \\\\\\n target_genome_index.txt \\\\\\n --fp $genomeDir \\\\\\n --fr $primerFile \\\\\\n --fld skewed-normal,5000,2000,-5 \\\\\\n --flr None,None \\\\\\n --nf 10000 \\\\\\n --np $nprocs \\\\\\n 2> ampFrags.log \\\\\\n > ampFrags.pkl\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nPlotting fragment length distribution\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"CE5\\/CE-5 frame analysis Bochum.ipynb\\\".\\nThe first task is:\\nAOS frames\\nAOS frames come from spacecraft 91 and virtual channels 1.\\nCan you write Python code for it?\\n\",\"targets\":\"\\naos = [CE5_AOSFrame.parse(f) for f in frames]\\n\\ncollections.Counter([a.primary_header.transfer_frame_version_number for a in aos])\\n\\ncollections.Counter([a.primary_header.spacecraft_id for a in aos\\n if a.primary_header.transfer_frame_version_number == 1])\\n\\ncollections.Counter([a.primary_header.virtual_channel_id for a in aos\\n if a.primary_header.transfer_frame_version_number == 1])\\n\\n[a.primary_header for a in aos if a.primary_header.virtual_channel_id == 1][:10]\\n\\nfc = np.array([a.primary_header.virtual_channel_frame_count for a in aos])\\nt = get_timestamps([a.insert_zone.timestamp for a in aos])\\n\\nplt.figure(figsize = (10,6), facecolor = 'w')\\nplt.plot(t, fc, '.')\\nplt.title(\\\"Chang'e 5 spacecraft 91 virtual channel 1 timestamps\\\")\\nplt.xlabel('AOS frame timestamp')\\nplt.ylabel('AOS virtual channel frame counter');\\n\\nplt.figure(figsize = (10,6), facecolor = 'w')\\nplt.plot(t[1:], np.diff(fc)-1)\\nplt.title(\\\"Chang'e 5 spacecraft 91 virtual channel 1 frame loss\\\")\\nplt.xlabel('AOS frame timestamp')\\nplt.ylabel('Frame loss');\\n\\nvc1_packets = list(ccsds.extract_space_packets(aos, 91, 1, get_timestamps = True))\\n\\nvc1_sp_headers = [ccsds.SpacePacketPrimaryHeader.parse(p[0]) for p in vc1_packets]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"#ee.display?\\nwavelength_float = [float(i) for i in header['wavelength']]\\nee_axes = {}\\nee_axes['wavelength'] = wavelength_float\\nee_axes['x']='Wavelength, nm'\\nee_axes['y']='Reflectance'\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nSpectral Unmixing allows pixels to be composed of fractions or abundances of each class. \\nEndmembers can be thought of as the basis spectra of an image. 
Once these endmember spectra are determined, the image cube can be 'unmixed' into the fractional abundance of each material in each pixel (Winter, 1999).\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Define class for dataset\\n\\nclass toy_set(Dataset):\\n \\n # Constructor with defult values \\n def __init__(self, length = 100, transform = None):\\n self.len = length\\n self.x = 2 * torch.ones(length, 2)\\n self.y = torch.ones(length, 1)\\n self.transform = transform\\n \\n # Getter\\n def __getitem__(self, index):\\n sample = self.x[index], self.y[index]\\n if self.transform:\\n sample = self.transform(sample) \\n return sample\\n \\n # Get Length\\n def __len__(self):\\n return self.len\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\n\\n\\n

    Simple dataset<\\/h2>\\n\\nLet us try to create our own dataset class.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# YOUR CODE HERE\\nraise NotImplementedError()\\n\\nassert 'myfiles.txt' in os.listdir('mydirectory')\\n\\nmyfiles = open('mydirectory\\/myfiles.txt').read()\\nassert 'exercise-02.ipynb' in myfiles\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nUsing ls and output redirection, create a file called myfiles.txt in the directory mydirectory that contains the list of files in the current directory.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"1.1 Each row is the probability predicted by Predictwise that Romney or Obama would win a state. The votes column lists the number of electoral college votes in that state. Use make_map to plot a map of the probability that Obama wins each state, according to this prediction.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n#your code here\\nmake_map(predictwise.Obama, \\\"P(Obama): PredictWise\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\".ipynb_checkpoints\\/user_apply_job-checkpoint.ipynb\\\".\\nThe first task is:\\nDates\\/duration of all applications:\\nCan you write Python code for it?\\n\",\"targets\":\"\\napps_with_duration = calDuration(apps)\\n\\napps_with_duration.head()\\n\\nall_res = pd.merge(apps_with_duration, agg_apps, left_index=True, right_on=['uid', 'job_title'])\\nall_res.sort_values(by='uid', inplace=True)\\nall_res = all_res[['uid', 'job_title', 'n_apply', 'first_apply_date', 'last_apply_date', 'n_active_day', 'total_duration_in_day']]\\n\\nall_res.head()\\n\\nall_res.shape\\n\\nall_res.to_csv(AGG_DIR + 'timed_apps.csv', index=False)\\n\\nnormal = all_res.query('n_apply < 100')\\nextremes = res\\n\\nplt.figure(figsize=(10,6))\\nplt.subplot(1,2,1)\\nplt.hist(extremes['n_active_day'], bins=np.unique(extremes['n_active_day']))\\nplt.title('Extreme cases')\\nplt.xlabel('# active days')\\nplt.ylabel('# user-apply-job cases')\\n\\nplt.subplots_adjust(wspace=.5)\\nplt.subplot(1,2,2)\\nplt.hist(normal['n_active_day'], bins=np.unique(normal['n_active_day']), \\n log=True)\\nplt.title('Normal cases')\\nplt.xlabel('# active days')\\nplt.ylabel('# user-apply-job cases')\\n\\nplt.savefig(RES_DIR + 'n_active_day.pdf')\\nplt.show()\\nplt.close()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"2.3\\n\",\"targets\":\"def prob3(x):\\n return np.exp(x \\/ 4) - x\\nx = np.linspace(0,3, 100)\\n\\ni = opt.root(prob3, x0=0)\\n\\nplt.plot(x, np.exp(x \\/ 4))\\nplt.plot(x, x)\\nplt.plot(i.x, i.x, 'ro')\\nplt.show()\\nprint(\\\"the solution is {:.2f}\\\".format(i.x[0]))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"4. 
Relationships between closures <\\/a>\\nNow will will shift back away from single column visualizations, and start to compare values between columns, looking specifically at the different closures in our dataframe. Run the cell below that will automate some of plotting for us.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndef plot_with_equality_line(xs, ys, best_fit=False):\\n fig, ax = plt.subplots()\\n sns.regplot(xs, ys, fit_reg=best_fit, ax=ax)\\n\\n lims = [np.min([ax.get_xlim(), ax.get_ylim()]), np.max([ax.get_xlim(), ax.get_ylim()])]\\n ax.plot(lims, lims, '--', alpha=0.75, zorder=0, c='black')\\n ax.set_xlim(lims)\\n ax.set_ylim(lims)\\n \\n print('Points above line: ' + str(sum(xs < ys)))\\n print('Points below line: ' + str(sum(xs > ys)))\\n print('Points on line: ' + str(sum(xs == ys)))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%%stata\\nxi: dprobit frac_change_00 treatment marketing, robust\\nxi: dprobit frac_change_00 treatment if (treatment == 1 | marketing == 1), robust\\nxi: dprobit frac_change_20 treatment marketing, robust\\nxi: dprobit frac_change_20 treatment if (treatment == 1 | marketing == 1), robust\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nProbit regressions\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# \u05db\u05ea\u05d1\u05d5 \u05d0\u05ea \u05d4\u05e4\u05d5\u05e0\u05e7\u05e6\u05d9\u05d4 \u05e9\u05dc\u05db\u05dd \u05db\u05d0\u05df\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\n

    Hints<\/p>\n

    Useful functions:\n
    split<\/em> \u2013 a method of string<\/em>.
    \n The % operator (modulo) \u2013 think about which number you need to take the modulo by.
    \n zfill<\/em> \u2013 use it when the hour is single-digit (for example, 1:05 becomes 01:05) <\/span>\n\n

    Additional hints:
    \n It is recommended to convert the time from a string<\/em> to an int<\/em>, perform the arithmetic, and finally convert back to a string<\/em>\n <\/span>\n<\/p>\n\n

    Now write a function that receives two parameters \u2013 a time and a time zone \u2013 and uses the function from the previous exercise to return the time in the requested time zone.\n
    \n For example:\n<\/p>\n

    \\n convert_to_timezone(\\\"10:34\\\", \\\"TLV\\\")<\\/code>\\n
    \\n \\\"13:34\\\"<\\/samp>\\n<\\/p>\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%%bigquery\\nCREATE OR REPLACE TABLE\\n babyweight.babyweight_data_train AS\\nSELECT\\n weight_pounds,\\n is_male,\\n mother_age,\\n plurality,\\n gestation_weeks,\\n mother_race\\nFROM\\n babyweight.babyweight_data\\nWHERE\\n ABS(MOD(hashmonth, 4)) < 3\\n\\n%%bigquery\\nCREATE OR REPLACE TABLE\\n babyweight.babyweight_data_eval AS\\nSELECT\\n weight_pounds,\\n is_male,\\n mother_age,\\n plurality,\\n gestation_weeks,\\n mother_race\\nFROM\\n babyweight.babyweight_data\\nWHERE\\n ABS(MOD(hashmonth, 4)) = 3\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nNext, we'll create tables in BigQuery that we'll use for training and evaluation.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Colors\\ntango_chameleon3 = \\\"#4e9a06\\\"\\ntango_sky3 = \\\"#204a87\\\"\\n\\n#Fonts\\noverlay = {'family' : 'serif',\\n 'color' : 'white',\\n 'weight' : 'normal',\\n 'size' : 55,\\n }\\n\\na=1.5 # scaling factor: figure size\\/font size\\n\\nf = figure(figsize=(3.5*a,1.8*a))\\ngs = GridSpec(100,100)\\n\\n\\naxLL = f.add_subplot(gs[:,0:43])\\nim = axLL.imshow( T6_comp[::-1], cmap=cm.coolwarm, extent=zoomarea, aspect=1 ) # drawing the function\\n# adding the Contour lines with labels\\nstalling = contour( XX,YY, V6_comp, [0], linewidths=1.4,linestyles=\\\"-\\\",colors='white')\\n\\nFDR = contour( XX,YY, T6_comp , [1], linewidths=1,linestyles=\\\"--\\\",colors=tango_chameleon3)\\nfor c in FDR.collections:\\n c.set_dashes([(0, (6, 3))])\\nNDR = contour( XX,YY, T6_comp , [0], linewidths=1,linestyles=\\\"--\\\",colors=tango_sky3)\\nfor c in NDR.collections:\\n c.set_dashes([(0, (1.0, 1.0))])\\n\\n#axLL.set_title(r'Liepelt, Lipowsky,' '\\\\n' r'Phys. Rev. Lett. \\\\textbf{98} (2007)', fontsize=5)\\ntext(.5, 27, r\\\"Liepelt \\\\& Lipowsky\\\" '\\\\n' r\\\"Phys.~Rev.~Lett.~\\\\textbf{98} (2007)\\\", fontdict=overlay,fontsize=6,color='black') # u'Un\u00edc\u00f4d\u00e8 \u222btri\u014bs'\\n\\n\\nxlabel('$f$', labelpad = 2)\\nylabel(r'$\\\\Delta\\\\mu$', labelpad=2) #labelpad: move label closer to axis\\n\\nim.set_clim(vmin=-2, vmax=2)\\n\\n\\naxLa = f.add_subplot(gs[:,32:100])\\nim = axLa.imshow( TLa_comp [::-1], cmap=cm.coolwarm, extent=zoomarea, aspect=1 ) # drawing the function\\n# adding the Contour lines with labels\\nstalling = contour( XX,YY, VLa_comp, [0], linewidths=1.4,linestyles=\\\"-\\\",colors='white')\\nFDR = contour( XX,YY, TLa_comp , [1], linewidths=1,linestyles=\\\"--\\\",colors=tango_chameleon3)\\nfor c in FDR.collections:\\n c.set_dashes([(0, (6, 3))])\\nNDR = contour( XX,YY, TLa_comp , [0], linewidths=1,linestyles=\\\"--\\\",colors=tango_sky3)\\nfor c in NDR.collections:\\n c.set_dashes([(0, (1.0, 1.0))])\\ntext(.5, 27, r\\\"Lau, Lacoste \\\\& Mallick\\\" '\\\\n' r\\\"Phys.~Rev.~Lett.~\\\\textbf{99} (2007)\\\", fontdict=overlay,fontsize=6,color='black') # u'Un\u00edc\u00f4d\u00e8 \u222btri\u014bs'\\n\\n## Modify labels\\nsetp( axLa.get_yticklabels(), visible=False) # hide y labels on La...\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nCode that generates Fig. 10 in Ref. 
[1]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"8. Identify and remove a cutpoint from this graph and re-draw it\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# 4 is a cutpoint\\nG.remove_node(4)\\n\\ndraw(G, graphviz_layout)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"CS231n\\/assignment3\\/NetworkVisualization-TensorFlow.ipynb\\\".\\nThe first task is:\\nFooling Images\\nWe can also use image gradients to generate \\\"fooling images\\\" as discussed in [3]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.\\n[3] Szegedy et al, \\\"Intriguing properties of neural networks\\\", ICLR 2014\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndef make_fooling_image(X, target_y, model):\\n \\\"\\\"\\\"\\n Generate a fooling image that is close to X, but that the model classifies\\n as target_y.\\n\\n Inputs:\\n - X: Input image, of shape (1, 224, 224, 3)\\n - target_y: An integer in the range [0, 1000)\\n - model: Pretrained SqueezeNet model\\n\\n Returns:\\n - X_fooling: An image that is close to X, but that is classifed as target_y\\n by the model.\\n \\\"\\\"\\\"\\n X_fooling = X.copy()\\n learning_rate = 1\\n ##############################################################################\\n # TODO: Generate a fooling image X_fooling that the model will classify as #\\n # the class target_y. Use gradient ascent on the target class score, using #\\n # the model.classifier Tensor to get the class scores for the model.image. #\\n # When computing an update step, first normalize the gradient: #\\n # dX = learning_rate * g \\/ ||g||_2 #\\n # #\\n # You should write a training loop #\\n # # \\n # HINT: For most examples, you should be able to generate a fooling image #\\n # in fewer than 100 iterations of gradient ascent. #\\n # You can print your progress over iterations to check your algorithm. #\\n ##############################################################################\\n pass\\n ##############################################################################\\n # END OF YOUR CODE #\\n ##############################################################################\\n return X_fooling\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# find the rows in the table\\n # slice to skip the header row\\n\\n \\n # loop over the rows\\n\\n\\n # get the table cells in the row\\n\\n \\n # assign them to variables\\n\\n \\n # use the .string attribute to get the text in the cell\\n\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nLooping over the table rows\\nLet's print a list of track numbers and song titles. Look at the structure of the table -- a table has rows represented by the tag tr, and within each row there are cells represented by td tags. The find_all() method returns a list. And we know how to iterate over lists: with a for loop. 
Let's do that.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Diagnostics.ipynb\\\".\\nThe first task is:\\nLogistic Regression\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# Initialize and fit the model.\\n\\nlr = LogisticRegression()\\n\\n#Tune parameters\\n\\nk1 = np.arange(20)+1\\nk2 = ['l1','l2']\\n\\nparameters = {'C': k1,\\n 'penalty':k2\\n }\\n\\n#Fit parameters\\n\\nlr1 = GridSearchCV(lr, param_grid=parameters, cv=kf)\\n\\n#Fit the tunned classifier in the traiing space\\nlr1.fit(X_train, y_train)\\n\\n#Print the best parameters\\nprint(lr1.best_params_) \\n\\n#Have a raw idea of the accuracy of each of the feeatures selection carried out with different methodologies\\nlr1.fit(XPCA, Y)\\n\\n# Predict on test set\\npredPCA_y = lr1.predict(XPCA)\\n\\nprint((\\n 'PCA accuracy: {}\\\\n'\\n 'RFE accuracy: {}\\\\n'\\n 'FI accuracy: {}\\\\n'\\n).format(cross_val_score(lr1,XPCA,Y,cv=kf).mean(),cross_val_score(lr,X2,Y,cv=kf).mean(),cross_val_score(lr,X3,Y,cv=kf).mean()))\\n\\n#Fit on Test set\\nlr1.fit(X_test, y_test)\\n\\npredtest_y = lr1.predict(X_test)\\n\\n#Evaluate model (test set)\\ntarget_names = ['0.0', '1.0']\\nprint(classification_report(y_test, predtest_y, target_names=target_names))\\n\\nconfusion = confusion_matrix(y_test, predtest_y)\\nprint(confusion)\\n\\n# Accuracy tables.\\ntable_test = pd.crosstab(y_test, predtest_y, margins=True)\\n\\ntest_tI_errors = table_test.loc[0.0,1.0] \\/ table_test.loc['All','All']\\ntest_tII_errors = table_test.loc[1.0,0.0] \\/ table_test.loc['All','All']\\n\\n\\nacclr1 = cross_val_score(lr1,X_test,y_test,cv=kf).mean()\\nacclr1pca = cross_val_score(lr1,XPCA,Y,cv=kf).mean()\\n\\nprint((\\n 'Logistic Regression accuracy: {}\\\\n'\\n 'Logistic Regression accuracy PCA: {}\\\\n'\\n 'Percent Type I errors: {}\\\\n'\\n 'Percent Type II errors: {}\\\\n\\\\n'\\n).format(acclr1,acclr1pca,test_tI_errors, test_tII_errors))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"20151103_PrattCriticalGeography\\/API_Example.ipynb\\\".\\nThe first task is:\\nTest the result\\nCan you write Python code for it?\\n\",\"targets\":\"\\nprint url\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Check out your predictions\\nHere, use the test data to view how well your network is modeling the data. 
If something is completely wrong here, make sure each step in your network is implemented correctly.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nfig, ax = plt.subplots(figsize=(16,8))\\n\\nmean, std = scaled_features['cnt']\\npredictions = network.run(test_features).T*std + mean\\nax.plot(predictions[0], label='Prediction')\\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\\nax.plot(((test_features['weekday_0'] + test_features['weekday_6'])*std + mean).values, label='Weekend')\\nax.plot((test_features['holiday']*std + mean).values, label='Holiday')\\nax.set_xlim(right=len(predictions))\\nax.legend()\\n\\ndates = pd.to_datetime(rides.iloc[test_data.index]['dteday'])\\ndates = dates.apply(lambda d: d.strftime('%b %d'))\\nax.set_xticks(np.arange(len(dates))[12::24])\\n_ = ax.set_xticklabels(dates[12::24], rotation=45)\\nax.set_ylim((-100,1000));\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"## head and tail methods\\n\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nThere are usually too many rows to print to the screen and you don't really want to find out by printing them all to the screen and regretting it. By default, when you type the name of the DataFrame and run a cell, pandas knows not to print the whole thing if there are a lot of rows. Instead, you will see the first and last few rows with dots in between. \\nA neater way to look at a preview of the dataset is by using the head() method. Calling DataFrame.head() will displace the first 5 rows of the data (this is also an exmaple of the \\\"dot\\\" notation where we want the \\\"head\\\" gapminder, so we write DataFrame.head()). You can specify how many rows you want to see as an argument, like DataFrame.head(10). The tail() method does the same with the last rows of the DataFrame. \\nUse these methods below to get an idea of what the gapminder DataFrame looks like.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Lab 2 - Quantum States.ipynb\\\".\\nThe first task is:\\nExample 2) Converting from ket to bra:\\nCan you write Python code for it?\\n\",\"targets\":\"\\npsi = Qobj([[1+1j],[2-1j]])\\npsi\\n\\npsi.dag()\\n\\npsi.dag().dag()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Get document topics\\nall_topics = model.get_document_topics(corpus, minimum_probability=0)\\nall_topics[0]\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nYou can refer to this notebook also before training the LDA model. It contains tips and suggestions for pre-processing the text data, and how to train the LDA model to get good results.\\nDoc-Topic distribution\\nNow we will use get_document_topics which infers the topic distribution of a document. 
It basically returns a list of (topic_id, topic_probability) for each document in the input corpus.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"CMS_massplot.ipynb\\\".\\nThe first task is:\\nTry editing the number of bins or bin range in the previous code cell. To re-exectue the code, click the play icon in the toolbar or press SHIFT + ENTER.\\nPart 2: Hunt for a particle\\nTry to create a new histogram to show the production of one of the following particles: J\\/$\\\\Psi$, Upsilon ($\\\\Upsilon$), or Z.\\nYou can edit the cell above or paste the code into the empty cell below.\\nPart Three\\nTry selecting a subset of the events to analyze. This is called \\\"applying cuts\\\" to your data. Below are a few examples you may find useful.\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# create a new data set of only the events containing oppositely charges particles\\ndata2 = data[data.Q1 != data.Q2] # change != to == for same charge\\n\\n# create a new data set of only events in a certain mass range\\ndata3 = data[(data.M > 50) & (data.M < 80)] # this choses 50 to 80 GeV\\n\\n# make a scatterplot of two columns\\n# plt.scatter(x_column, y_column, s=point_size, other parameters)\\nplt.scatter(data.eta1, data.phi1, s=.001)\\n\\n# make your plots look like they're from xkcd.com\\nplt.xkcd()\\n\\n# plt.hist can stack two histograms\\nd1 = data[data.Q1 == data.Q2]\\nd2 = data[data.Q1 != data.Q2]\\n\\nfig = plt.figure(figsize=(10, 5))\\nplt.hist([d1.M, d2.M], range=[2,5], stacked=True, label=[\\\"events with same Q\\\",\\\"events with opp Q\\\"], bins=20, log=True)\\nplt.title(\\\"Cutting on net charge\\\")\\nplt.xlabel(\\\"mass (GeV)\\\")\\nplt.ylabel(\\\"log number of events\\\")\\nplt.legend()\\n\\n# to make normal-looking plots again\\nmpl.rcParams.update(inline_rc)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"4. Label the CohortPeriod for each CohortGroup\\nWe want to look at how each cohort has behaved in the months following their first purchase, so we'll need to index each cohort to their first purchase month. 
For example, CohortPeriod = 1 will be the cohort's first month, CohortPeriod = 2 is their second, and so on.\\nThis allows us to compare cohorts across various stages of their lifetime.\\n\",\"targets\":\"def cohort_period(df):\\n \\\"\\\"\\\"\\n Creates a `CohortPeriod` column, which is the Nth period based on the user's first purchase.\\n \\n Example\\n -------\\n Say you want to get the 3rd month for every user:\\n df.sort(['UserId', 'OrderTime', inplace=True)\\n df = df.groupby('UserId').apply(cohort_period)\\n df[df.CohortPeriod == 3]\\n \\\"\\\"\\\"\\n df['CohortPeriod'] = np.arange(len(df)) + 1\\n return df\\n\\ncohorts = cohorts.groupby(level=0).apply(cohort_period)\\ncohorts.head()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Getting Started With Badfish.ipynb\\\".\\nThe first task is:\\nWe need to convert the Pandas dataframe to Badfish's missframe.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nmf = bf.MissFrame(df)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Lesson_05_python_idioms_and_style.ipynb\\\".\\nThe first task is:\\nUnpacking with wildcard \\\"*\\\"\\nCan you write Python code for it?\\n\",\"targets\":\"\\nseq = [\\\"a\\\", \\\"b\\\", \\\"c\\\", \\\"d\\\", \\\"e\\\", \\\"d\\\"]\\nstart, *middle, end = seq\\nprint(start)\\nprint(middle)\\nprint(end)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"2. Authentication\\nYou only need your Algorithmia API Key to run the following commands.\\n\",\"targets\":\"API_KEY = 'YOUR_API_KEY'\\n# Create a client instance\\nclient = Algorithmia.client(API_KEY)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Compiling Expressions\\nFor efficiency, sympy.lambdify is preferred for numerical analysis. It supports mathematical functions from math, sympy.Function, or mpmath. Since these library functions are compiled Python, C, or even Fortran, they are significantly faster than sympy.evalf.\\n\",\"targets\":\"rad=np.linspace(0, np.pi, 8+1)\\nf = sympy.lambdify(ins, eqn[x], 'numpy')\\n%timeit f(1,0,rad)\\n\\n%%timeit \\nfor i in rad: # evalf doesn't support arrays!\\n eqn[x].evalf(subs={'x':1.0,'y':0.0,'theta':i})\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\".ipynb_checkpoints\\/NB1_IdentifyProblem+DataClean-checkpoint.ipynb\\\".\\nThe first task is:\\nFrom the above results, from the 33, variables,column id number 1 is an integer diagnosis 569 non-null object. and rest are float. 
More on python variables\\nCan you write Python code for it?\\n\",\"targets\":\"\\n#check for missing variables\\n#data.isnull().any()\\n\\ndata.diagnosis.unique()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Data Preprocessing\\nMap attack field to attack class\\nNSL-KDD dataset has 42 attributes for each connection record including class label containing attack types. The attack types are categorized into four attack classes as described by Mahbod Tavallaee et al. in A Detailed analysis of the KDD CUP 99 Data Set as:\\n1. Denial of Service (DoS): is an attack in which an adversary directed a deluge of traffic requests to a system in order to make the computing or memory resource too busy or too full to handle legitimate requests and in the process, denies legitimate users access to a machine.\\n2. Probing Attack (Probe): probing network of computers to gather information to be used to compromise its security controls. \\n3. User to Root Attack (U2R): a class of exploit in which the adversary starts out with access to a normal user account on the system (gained either by sniffing passwords, a dictionary attack, or social engineering) and is able to exploit some vulnerability to gain root access to the system. \\n4. Remote to Local Attack (R2L): occurs when an attacker who has the ability to send packets to a machine over a network but who does not have an account on that machine exploits some vulnerability to gain local access as a user of that machine.\\n\",\"targets\":\"mapping = {'ipsweep': 'Probe','satan': 'Probe','nmap': 'Probe','portsweep': 'Probe','saint': 'Probe','mscan': 'Probe',\\n 'teardrop': 'DoS','pod': 'DoS','land': 'DoS','back': 'DoS','neptune': 'DoS','smurf': 'DoS','mailbomb': 'DoS',\\n 'udpstorm': 'DoS','apache2': 'DoS','processtable': 'DoS',\\n 'perl': 'U2R','loadmodule': 'U2R','rootkit': 'U2R','buffer_overflow': 'U2R','xterm': 'U2R','ps': 'U2R',\\n 'sqlattack': 'U2R','httptunnel': 'U2R',\\n 'ftp_write': 'R2L','phf': 'R2L','guess_passwd': 'R2L','warezmaster': 'R2L','warezclient': 'R2L','imap': 'R2L',\\n 'spy': 'R2L','multihop': 'R2L','named': 'R2L','snmpguess': 'R2L','worm': 'R2L','snmpgetattack': 'R2L',\\n 'xsnoop': 'R2L','xlock': 'R2L','sendmail': 'R2L',\\n 'normal': 'Normal'\\n }\\n\\n# Apply attack class mappings to the dataset\\ndfkdd_train['attack_class'] = dfkdd_train['attack'].apply(lambda v: mapping[v])\\ndfkdd_test['attack_class'] = dfkdd_test['attack'].apply(lambda v: mapping[v])\\n\\n# Drop attack field from both train and test data\\ndfkdd_train.drop(['attack'], axis=1, inplace=True)\\ndfkdd_test.drop(['attack'], axis=1, inplace=True)\\n\\n# View top 3 train data \\ndfkdd_train.head(3)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"4. 
The moving Z-score model\\n\",\"targets\":\"from graphlab.toolkits.anomaly_detection import moving_zscore\\n\\nzscore_model = moving_zscore.create(okla_daily, feature='count',\\n window_size=30,\\n min_observations=15)\\n\\nprint zscore_model\\n\\nzscore_model.scores.tail(3)\\n\\nzscore_model.scores.head(3)\\n\\nanomaly_scores['outlier_score'] = zscore_model.scores['anomaly_score']\\nanomaly_scores.tail(5).print_rows()\\n\\nfig, ax = plt.subplots(2, sharex=True)\\nax[0].plot(anomaly_scores['time'], anomaly_scores['count'], color='dodgerblue')\\nax[0].set_ylabel('# quakes')\\n\\nax[1].plot(anomaly_scores['time'], anomaly_scores['outlier_score'], color='orchid')\\nax[1].set_ylabel('outlier score')\\n\\nax[1].set_xlabel('Date')\\nfig.autofmt_xdate()\\nfig.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"(3) If-elif-else statement:\\n\",\"targets\":\"a = 3\\nb = 5\\n\\nif a > b:\\n print('a is bigger than b')\\nelif a < b:\\n print('a is smaller than b')\\nelse:\\n print('a is equal to b')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nfrom sklearn.neighbors import KNeighborsRegressor\\nfrom sklearn import grid_search\\n\\nknn = KNeighborsRegressor(n_neighbors=6,weights='distance')\\n\\nparameters = {'weights':('distance','uniform'),'algorithm':('auto', 'ball_tree', 'kd_tree', 'brute')}\\ngrid_knn = grid_search.GridSearchCV(knn, parameters,n_jobs=2)\\n\\n%%time\\ngrid_knn.fit(X_train,y_train)\\n\\nprint(grid_knn.grid_scores_)\\nprint(\\\"Best: \\\",grid_knn.best_params_)\\n\\nknn = grid_knn.best_estimator_\\n\\nknn.fit(X_train,y_train)\\n\\nerr = (knn.predict(X_test)-y_test)**2\\nerr.sum()\\/len(err)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Utilities\\n\\nname = ['Oszilloskop', 'Zeitmesser', 'Funktionsgenerator', 'Verst\u00e4rker', 'Vakuumpumpe', 'Netzger\u00e4t', 'Temperaturmessger\u00e4t']\\nmanufacturer = ['LeCroy', 'Keithley', 'HP', 'WicTronic', 'Pfeiffer', ' ', ' ']\\ndevice = ['9631 Dual 300MHz Oscilloscope 2.5 GS\\/s', '775 Programmable Counter\\/Timer', '33120A 15MHz Waveform Generator', 'Zweikanalverst\u00e4rker', 'Vacuum', ' ', ' ']\\n\\nutilities_tbl = PrettyTable(\\n list(zip(name, manufacturer, device)),\\n label='tab:utilities',\\n caption='Verwendete Ger\u00e4tschaften',\\n extra_header=[\\n 'Funktion',\\n 'Hersteller',\\n 'Ger\u00e4tename',\\n ], entries_per_column=7)\\nutilities_tbl.show()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nVerwendete Messger\u00e4te\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"06\\/Dark Sky Forecast_Homework_6_Skinner.ipynb\\\".\\nThe first task is:\\nThe time zone of Trogen is correct! 
This is where I live.\\nCan you write Python code for it?\\n\",\"targets\":\"\\nprint(\\\"Longitude:\\\", weather_data['longitude'], \\\"Latitude\\\", weather_data['latitude'])\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Building the graph\\nFrom Chris McCormick's blog, we can see the general structure of our network.\\n\\nThe input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.\\nThe idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.\\nI'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.\\n\\nExercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.\\n\",\"targets\":\"train_graph = tf.Graph()\\nwith train_graph.as_default():\\n inputs = tf.placeholder(tf.int32, (None,))\\n labels = tf.placeholder(tf.int32, (None, None))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"! .\\/convert_checkpoint_to_edgetpu_tflite.sh --network_type mobilenet_v1 --checkpoint_num $NUM_TRAINING_STEPS\\n\\n! curl https:\\/\\/packages.cloud.google.com\\/apt\\/doc\\/apt-key.gpg | sudo apt-key add -\\n\\n! echo \\\"deb https:\\/\\/packages.cloud.google.com\\/apt coral-edgetpu-stable main\\\" | sudo tee \\/etc\\/apt\\/sources.list.d\\/coral-edgetpu.list\\n\\n! sudo apt-get update\\n\\n! sudo apt-get install edgetpu-compiler\\t\\n\\n%cd transfer_learn\\/models\\/\\n\\n! ls\\n\\n! 
edgetpu_compiler output_tflite_graph.tflite\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nCompile for the Edge TPU\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%time hyper_likelihood_all = hyper_likelihood.prod(axis=0)\\nhyper_likelihood_all.sum()\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nWe can multiply this out to get the product of the likelihoods.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Perform Welch's periodogram\\nsegment = int( srate*5 ) \\nmyhann = signal.get_window('hann', segment)\\n\\n# obtain simply Power (amplitude^2) withouth tappering\\nmyparams = dict(fs = srate, nperseg = segment, window = np.ones(segment), \\n noverlap = 0, scaling = 'spectrum', return_onesided=True)\\nfreq, ps = signal.welch(x = sine, **myparams)# units uV**2\\nps = 2*ps # correct for negative frequencies\\n\\n#obtain Power density (amplitude^2\\/Hz) withouth tappering\\nmyparams = dict(fs = srate, nperseg = segment, window = np.ones(segment), \\n noverlap = 0, scaling = 'density', return_onesided=True)\\nfreq, psd = signal.welch(x = sine, **myparams)# units uV**2\\/Hz\\npsd = 2*psd # correct for negative frequencies\\n\\ndfreq = freq[1]\\nprint('Spectral resolution = %2.4f Hz'%dfreq)\\n\\n# Plot the power spectrum\\n\\nfig, ax = plt.subplots(1, 2, figsize=(16, 4))\\n\\nax[0].set_title(\\\"Amplitude spectrum (Fourier transform)\\\")\\nax[0].stem(hz,amp[:len(hz)], use_line_collection = True)\\nax[0].set_ylabel('Amplitude ($\\\\mu V$)')\\n\\nax[1].set_title(\\\"Power spectrum (Welch's periodogram)\\\")\\nax[1].plot(freq, ps, color='k', lw=2) \\nax[1].set_ylabel('Power spectrum ($\\\\mu V^2$)')\\n\\nfor myax in ax:\\n myax.set_xlabel('Frequency (Hz)')\\n myax.set_xlim(0,20)\\n myticks = list(range(6)) + list(range(10, 25,5))\\n myax.set_xticks(myticks)\\n\\n# Sine wave has an amplitude of 2 uV, the power is 4 uV and the power density is 4*5 if we took \\n# a segment with size = 4x the sampling rate. \\nprint('Signal amplitude = %2.4f uVolts'%amp[int(10\\/dhz)]) \\nprint('Signal power = %2.4f uVolts^2'%ps[int(10\\/dfreq)])\\nprint('Singal power density = %2.4f uVolts^2\\/Hz'%psd[int(10\\/dfreq)])\\n\\nnp.sqrt(124.9824\\/5)\\n\\n# Plot spectrogram\\n# now we will analyze window lenghts of 500 ms \\nsegment = int(0.5*srate) # 500 points (0.5 sec, 500 ms)\\n\\n# we have less frequency resolution here because the signals are smaller\\nNsamples = int( np.floor(segment\\/2) )\\nhz = np.linspace(0, Nyquist, Nsamples + 1)\\ndfreq = hz[1]\\nprint('Spectral resolution = %2.4f Hz'%dfreq)\\n\\nnsteps =...\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nFor the Welch's periodogram, the spectral resolution is given by the lenght of the segment. If we take a segment that correspond to the sampling rate, then the frequency resolution is 1 Hz. A larger segment (e.g., 4 times the size of the sampling rate) would give a higher frequency resolution (0.25 Hz). \\nWelch's periodogram will return the power (amplitude to the square) versus frequency. We can also have the power density. 
Power density is the power per Hz, and if we take a segment which is four times the sampling rate, then the power will be multiplied by four to return power\\/Hz. In this case, when we add in Welch's, scaling=density and will obtain the power times 4, because our segment has a lenght of four times the sampling rate.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Advanced Db2 JSON Techniques.ipynb\\\".\\nThe first task is:\\nSales by State\\nNow that we can access the data using the JSON macros, we can start doing some analysis on the records. For each of these queries, remember that there are no indexes being used to determine the answer. This first query will show the total sales per state. In order to determine this, we must take the array of purchases by a customer, calculate the total cost of their orders and sum it up for the state.\\nCan you write Python code for it?\\n\",\"targets\":\"\\n%%sql -pb\\nWITH \\n %jsonarray(customers,custno,details,purchases), \\nSALES(state, amount) AS\\n ( \\n SELECT %js(c.details,contact.state,2), sum(%ji(j.item,quantity) * %jd(j.item,item_cost,'7,2')) \\n from jsonarray j, customers c \\n where j.pk = c.custno \\n GROUP BY %js(c.details,contact.state,2) \\n ) \\nSELECT state, sum(amount) from SALES \\nGROUP BY state \\nORDER BY 2 DESC\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# start out with the number of classes for neighbors\\ndata_knn = KNeighborsClassifier(n_neighbors = 10, metric='euclidean')\\ndata_knn\\n\\ndata_knn.fit(x_train, y_train)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\n5. Building the k Nearest Neighbor Classifier\\n\\nexperiment with different values for neighbors\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"DL0110EN\\/4.1.2xor.ipynb\\\".\\nThe first task is:\\n
    <\\/a>\\n

    One Neuron<\\/h2>\\n\\nCreate a neural network model<\\/code> with one neuron. Then, use the following code to train it:\\nDouble-click here for the solution.\\n\\nCan you write Python code for it?\\n\",\"targets\":\"\\n\\nlearning_rate=0.001\\ncriterion=nn.BCELoss()\\noptimizer=torch.optim.SGD(model.parameters(), lr=learning_rate)\\ntrain_loader=DataLoader(dataset=data_set,batch_size=1)\\nLOSS12=train(data_set,model,criterion, train_loader, optimizer, epochs=500)\\nplot_decision_regions_2class(model,data_set)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"9.2 \u5ea6\u91cf\u7684\u57fa\u7840\\n9.2.1 \u5ea6\u91cf\u7684\u57fa\u7840\u6982\u5ff5\\n9.2.2 \u5ea6\u91cf\u7684\u53ef\u89c6\u5316\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nmetrics.plot_sharp_volatility_cmp()\\n\\ndef sharpe(rets, ann=252):\\n return rets.mean() \\/ rets.std() * np.sqrt(ann)\\n\\nprint('\u7b56\u7565sharpe\u503c\u8ba1\u7b97\u4e3a\uff1d{}'.format(sharpe(metrics.algorithm_returns)))\\n\\nmetrics.plot_effect_mean_day()\\n\\nmetrics.plot_keep_days()\\n\\nmetrics.plot_sell_factors()\\n\\nmetrics.plot_max_draw_down()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Style your timeseries chart \\nt = TimeSeries(df, x='datetime', xlabel='time', ylabel='Anomaly(\u00baC)',\\n xgrid = False, ygrid=True, tools=False, width=950, height=300,\\n title=\\\"Temperature Anomaly(\u00baC) Monthly Average\\\", palette=[\\\"grey\\\"])\\n\\n# Show new chart\\nshow(t)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nExercise: Style your plot\\nIdeas:\\n\\nAdd a title\\nAdd axis labels\\nChange width and height\\nDeactivate toolbox or customize available tools\\nChange line color\\n\\nCharts arguments can be found: http:\\/\\/bokeh.pydata.org\\/en\\/latest\\/docs\\/user_guide\\/charts.html#generic-arguments\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"A useful metric for a taxi driver to compare tips is the percentage of tip given with respect to the total fare amount.\\n\",\"targets\":\"# check if the fares and fees sum up to total_amount\\nprint pd.concat([df.tip_amount + df.fare_amount + df.tolls_amount + \\\\\\n df.extra + df.mta_tax + df.improvement_surcharge, \\\\\\n df.total_amount], axis=1).head()\\n\\n# calculate tip percentage\\ndf['total_fare'] = df.total_amount - df.tip_amount\\ndf['tip_percentage'] = df.tip_amount \\/ df.total_fare * 100\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Arrays can also be indexed using mask indexing:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nprint a[a < 5]\\nprint a[a % 2 == 0]\\nprint a[((a > 1) & (a < 7))]*2\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Lecture-numpy.ipynb\\\".\\nThe first task is:\\nMatrix algebra\\nWhat about matrix mutiplication? 
There are two ways. We can either use the dot function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments:\\nCan you write Python code for it?\\n\",\"targets\":\"\\nnp.dot(A, A)\\n\\nnp.dot(A, v1)\\n\\nnp.dot(v1, v1)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# We are setting up a query for items in the SHARE dataset that have the keyword \\\"frogs\\\"\\nbasic_query = {\\n \\\"query\\\": {\\n \\\"query_string\\\": {\\n \\\"query\\\": \\\"frogs\\\"\\n }\\n }\\n}\\n\\n#this puts the results of querying SHARE_API with what we outlined in basic_query (frogs)\\nquery_results = query_share(SHARE_API, basic_query)\\n\\n#print out the numbered list of the results\\nprint_numbered_results(query_results)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nComplex Queries\\nPagination\\nOne request to the SHARE API will return just 10 results by default. To get more results, you can use the \\\"size\\\" parameter in your request, or paginate through the results you get back.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"GBT\\/filterbank_tutorial\\/Filterbank Tutorial (public).ipynb\\\".\\nThe first task is:\\nThere are many useful pieces of information here, which you wil probably use at some point or another. I'll explain the most important ones, but all the header keywords are detailed in blimpy:\\n\\nsource_name: The name of the target observed. In this case it is HIP35136, a star listed in the Hipparcos catalog.\\ntstart: The start date of the observation, in Modified Julian Date format\\nsrc_raj: The right ascension of the target in hours:minutes:seconds.\\nsrc_dej: The declination of the target in degrees:arcminutes:arcseconds.\\nnchans: The number of channels (i.e. frequency bins)\\nfch1: The center frequency of the first channel (i.e. the highest-frequency channel).\\nfoff: The bandwidth of each channel, in MHz. It is negative because the channels are numbered in increasing order as they decrease in frequency.\\n\\n4. Getting Your Hands on the Actual Data\\nNow that I've showed you how to visualize the data and print the header info, I'll show you how to extract the data so that you can use it for analysis purposes.\\nExtracting Frequency Bins\\nFirst let's get ahold of the values of the center frequencies of the channels. This is easy to do:\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfreqs = fil.freqs\\nprint freqs\\nprint type(freqs)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"\\n
    \\n \\n \\\"Colab Run in Colab\\n <\\/a>\\n <\\/td>\\n \\n \\n \\\"GitHubView on GitHub\\n <\\/a>\\n <\\/td>\\n<\\/table>\\n\\nOverview\\nThis tutorial demonstrates AI Platform's CloudTuner service.\\nObjective\\nCloudTuner is implemented into the KerasTuner and uses CAIP Optimizer Beta (Vizier as a backend) as an oracle to get suggested trials, run trials, etc. The usage of CloudTuner is the same as KerasTuner and additionally accept Optimizer's study_config as an alternative input.\\nCosts\\nThis tutorial uses billable components of Google Cloud:\\n\\nAI Platform Training\\nCloud Storage\\n\\nLearn about AI Platform Training\\npricing and Cloud Storage\\npricing, and use the Pricing\\nCalculator\\nto generate a cost estimate based on your projected usage.\\nPIP install packages and dependencies\\nInstall additional dependencies not installed in the notebook environment.\\n\\nUse the latest major GA version of the framework.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n! pip install google-cloud\\n! pip install google-cloud-storage\\n! pip install requests\\n! pip install tensorflow_datasets\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"After fitting PCA to the data, we can read the directions of the new axes (the principal directions) using:\\n\",\"targets\":\"print(pca.components_)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"
    \\n
    \\n \\\"\u05ea\u05e8\u05d2\u05d5\u05dc\\\" \\n <\\/div>\\n
    \\n

    \\n \u05dc\u05de\u05d4 \u05d4\u05dc\u05d5\u05dc\u05d0\u05d4 \u05d4\u05d6\u05d5 \u05d0\u05d9\u05df\u05be\u05e1\u05d5\u05e4\u05d9\u05ea? \u05ea\u05e7\u05e0\u05d5 \u05d0\u05d5\u05ea\u05d4 \u05db\u05da \u05e9\u05ea\u05e4\u05e2\u05dc \u05db\u05e8\u05d0\u05d5\u05d9.\\n <\\/p>\\n <\\/div>\\n<\\/div>\\n\\n

    \u05d4\u05dc\u05d5\u05dc\u05d0\u05d4 \u05dc\u05d0 \u05e8\u05e6\u05d4<\\/p>\\n

    \\n\u05d4\u05de\u05e7\u05e8\u05d4 \u05d4\u05d4\u05e4\u05d5\u05da \u05de\u05dc\u05d5\u05dc\u05d0\u05d4 \u05d0\u05d9\u05df\u05be\u05e1\u05d5\u05e4\u05d9\u05ea \u05d4\u05d5\u05d0 \u05dc\u05d5\u05dc\u05d0\u05d4 \u05e9\u05e4\u05e9\u05d5\u05d8 \u05dc\u05d0 \u05e8\u05e6\u05d4.
    \\n\u05d1\u05de\u05d9\u05dc\u05d9\u05dd \u05d0\u05d7\u05e8\u05d5\u05ea \u2013 \u05dc\u05d5\u05dc\u05d0\u05d4 \u05e9\u05ea\u05e0\u05d0\u05d9 \u05d4\u05db\u05e0\u05d9\u05e1\u05d4 \u05e9\u05dc\u05d4 \u05e9\u05e7\u05d5\u05dc \u05dc\u05beFalse<\\/code> \u05d1\u05d4\u05e8\u05e6\u05ea\u05d4 \u05d4\u05e8\u05d0\u05e9\u05d5\u05e0\u05d4.
    \\n\u05d1\u05de\u05e7\u05e8\u05d4 \u05d4\u05d6\u05d4, \u05e8\u05d0\u05d5\u05d9 \u05dc\u05d1\u05d3\u05d5\u05e7 \u05db\u05d9\u05e6\u05d3 \u05d0\u05ea\u05d7\u05dc\u05e0\u05d5 \u05d0\u05ea \u05d4\u05e1\u05d1\u05d9\u05d1\u05d4, \u05d5\u05d0\u05dd \u05d4\u05ea\u05e0\u05d0\u05d9 \u05e9\u05db\u05ea\u05d1\u05e0\u05d5 \u05d0\u05db\u05df \u05e2\u05d5\u05d1\u05d3.\\n<\\/p>\\n\",\"targets\":\"i = 8\\nwhile i <= 0:\\n print(i)\\nprint(\\\"End of the program\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Separate independent and dependent variables\\nX = data.ix[:,1:]\\ny = data['Admit']\\n\\n# Create a logistic regression model\\ninitial = LogisticRegression(C = 1000, random_state=0)\\ninitial.fit(X,y)\\n\\n# Check model accuracy\\nprint(\\\"Accuracy Score:\\\", initial.score(X,y))\\n\\n# What percentage of students actually got into grad school\\nprint(\\\"Actual probability of admission:\\\", y.mean())\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nLogistic Regression\\nWe will use logistic regression to predict the probability that a particular student gets into grad school.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"

    This corresponds to averaging over 13 source vectors rather than one (but has a much lower variance than using 13 Gaussian source vectors). A comparison between the convergence behavior of the probing sampler and the Gaussian sampler is presented later.<\\/p>\\n\\n

    Then we define LogRationalApproximationCGM<\\/a> operator function class, which internally uses the Eigensolver to compute the Eigenvalues, uses JacobiEllipticFunctions<\\/a> to compute the complex shifts, weights and the constant multiplier in the rational approximation expression, takes the probing vector generated by the trace sampler and then uses CG-M solver (CGMShiftedFamilySolver<\\/a>) to solve the shifted systems. Precompute is not necessary here too.<\\/p>\\n\",\"targets\":\"cgm = sg.CGMShiftedFamilySolver()\\n# setting the iteration limit (set this to higher value for higher condition number)\\ncgm.set_iteration_limit(100)\\n\\n# accuracy determines the number of contour points in the rational approximation (i.e. number of shifts in the systems)\\naccuracy = 1E-15\\n\\n# we create a operator-log-function using the sparse matrix operator that uses CG-M to solve the shifted systems\\nop_func = sg.LogRationalApproximationCGM(op, eigen_solver, cgm, accuracy)\\nop_func.precompute()\\nprint('Number of shifts:', op_func.get_num_shifts())\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# In the tf.keras.layers package, layers are objects. To construct a layer,\\n# simply construct the object. Most layers take as a first argument the number\\n# of output dimensions \\/ channels.\\nlayer = tf.keras.layers.Dense(100)\\n# The number of input dimensions is often unnecessary, as it can be inferred\\n# the first time the layer is used, but it can be provided if you want to \\n# specify it manually, which is useful in some complex models.\\nlayer = tf.keras.layers.Dense(10, input_shape=(None, 5))\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nHigh level API\\nWe recommend using tf.keras as a high-level API for building neural networks. That said, most TensorFlow APIs are usable with eager execution.\\nLayers: common sets of useful operations\\nMost of the time when writing code for machine learning models you want to operate at a higher level of abstraction than individual operations and manipulation of individual variables.\\nMany machine learning models are expressible as the composition and stacking of relatively simple layers, and TensorFlow provides both a set of many common layers as a well as easy ways for you to write your own application-specific layers either from scratch or as the composition of existing layers.\\nTensorFlow includes the full Keras API in the tf.keras package, and the Keras layers are very useful when building your own models.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"! git checkout {WORK_BRANCH}\\n\\n! git fetch {GIT_REMOTE_UPSTREAM}\\n\\n! 
git rebase {GIT_REMOTE_UPSTREAM}\\/{WORK_BRANCH}\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nEnsure the repo is up to date\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# keyBy\\nx = sc.parallelize([1,2,3])\\ny = x.keyBy(lambda x: x**2)\\nprint(x.collect())\\nprint(y.collect())\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\n\\n\\n<\\/a>\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"GPy\\/basic_kernels.ipynb\\\".\\nThe first task is:\\nbasics: construction, printing and plotting\\nFor most kernels, the input dimension (domain) is the only mandatory parameter to define a kernel object. However, it is also possible to specify the values of the parameters. For example, the three following commands are valid for defining a squared exponential kernel (ie rbf or Gaussian)\\nCan you write Python code for it?\\n\",\"targets\":\"\\nker1 = GPy.kern.RBF(1) # Equivalent to ker1 = GPy.kern.rbf(input_dim=1, variance=1., lengthscale=1.)\\nker2 = GPy.kern.RBF(input_dim=1, variance = .75, lengthscale=2.)\\nker3 = GPy.kern.RBF(1, .5, .5)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"By default logging messages print to the console, but look at\\n:func:mne.set_log_file to save output to a file.\\nAccess raw data\\n^^^^^^^^^^^^^^^\\n\",\"targets\":\"from mne.datasets import sample # noqa\\ndata_path = sample.data_path()\\nraw_fname = data_path + '\\/MEG\\/sample\\/sample_audvis_filt-0-40_raw.fif'\\nprint(raw_fname)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"!python -c \\\"import tensorflow as tf; print(f'TF version: {tf.__version__}')\\\"\\n!python -c \\\"import tfx; print(f'TFX version: {tfx.__version__}')\\\"\\n!python -c \\\"import kfp; print(f'KFP version: {kfp.__version__}')\\\"\\nprint(f\\\"vertex_ai: {vertex_ai.__version__}\\\")\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nValidate lab package version installation\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"! curl https:\\/\\/packages.cloud.google.com\\/apt\\/doc\\/apt-key.gpg | sudo apt-key add -\\n\\n! echo \\\"deb https:\\/\\/packages.cloud.google.com\\/apt coral-edgetpu-stable main\\\" | sudo tee \\/etc\\/apt\\/sources.list.d\\/coral-edgetpu.list\\n\\n! sudo apt-get update\\n\\n! 
sudo apt-get install edgetpu-compiler\\t\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nInstall the Edge TPU Compiler:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"DNN_Tutorial.ipynb\\\".\\nThe first task is:\\nConvolutional MLP\\nWe can also have the first layer be a set of small filters which are convolved with the images.\\nTry different parameters and see what happens. (This network might be slow.)\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# Create the network!\\ncnn = Sequential()\\n\\n# First fully connected layer\\ncnn.add(Convolution2D(20, 5, 5, input_shape=(1, im_size, im_size), border_mode='valid', subsample=(2, 2)))\\ncnn.add(Activation('tanh')) # nonlinearity\\nprint('Shape after layer 1: {}'.format(cnn.output_shape))\\n\\n# Take outputs and turn them into a vector\\ncnn.add(Flatten())\\nprint('Shape after flatten: {}'.format(cnn.output_shape))\\n\\n# Fully connected layer\\ncnn.add(Dropout(0.0)) # dropout is currently turned off, you may need to train for more epochs if nonzero\\ncnn.add(Dense(100)) # number of targets, 10 for y, 2 for eo\\ncnn.add(Activation('tanh'))\\n\\n# Second fully connected layer with softmax output\\ncnn.add(Dropout(0.0)) # dropout is currently turned off, you may need to train for more epochs if nonzero\\ncnn.add(Dense(10)) # number of targets, 10 for y, 2 for eo\\ncnn.add(Activation('softmax'))\\n\\n# Adam is a simple optimizer, SGD has more parameters and is slower but may give better results\\n#opt = Adam()\\n#opt = RMSprop()\\nopt = SGD(lr=0.1, momentum=0.9, decay=0.0001, nesterov=True)\\nprint('')\\ncnn.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])\\ncnn.fit(train_im[:, np.newaxis, ...], np_utils.to_categorical(train_y), nb_epoch=20, batch_size=100)\\ntr_score = cnn.evaluate(train_im[:, np.newaxis, ...], np_utils.to_categorical(train_y), batch_size=100)\\nva_score = cnn.evaluate(valid_im[:, np.newaxis, ...], np_utils.to_categorical(valid_y), batch_size=100)\\nprint('')\\nprint('Train loss: {}, train accuracy: {}'.format(*tr_score))\\nprint('Validation loss: {}, validation accuracy: {}'.format(*va_score))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Counterfactual plot\\nSimilar to p140\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# p(C | do(G), do(P))\\nPfixed = 0\\n\\ndf0 = make_samples(G=0, P=Pfixed, nsamples=200)\\ndisplay(df0.head())\\nCbar0 = df0[\\\"C\\\"].values.mean()\\n\\ndf1 = make_samples(G=1, P=Pfixed, nsamples=200)\\ndisplay(df1.head())\\nCbar1 = df1[\\\"C\\\"].values.mean()\\n\\nprint([Cbar0, Cbar1])\\n\\nsim_dat = dict(G=jnp.array([0, 1]), P=jnp.array(Pfixed))\\npost = mcmc_run.get_samples()\\npred = Predictive(model_logreg, post)(random.PRNGKey(22), **sim_dat)\\nprint(pred[\\\"C\\\"].shape)\\nprint(np.mean(pred[\\\"C\\\"], axis=0))\\n\\na_est = post[\\\"a\\\"].mean()\\nb_PC_est = post[\\\"b_PC\\\"].mean()\\nb_GC_est = post[\\\"b_GC\\\"].mean()\\nP = Pfixed\\n\\nG = np.array([0, 1])\\nlogits = a_est + b_PC_est * P + b_GC_est * G\\nnp.set_printoptions(formatter={\\\"float\\\": lambda x: 
\\\"{0:0.3f}\\\".format(x)})\\nprint(expit(logits))\\n\\npred\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Analysis\\nDegree histogram\\nReturn a list of the frequency of each degree value; degree values are the index in the list\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# returns a list of frequencies of degrees\\nprint (\\\"undirected graph =\\\"), nx.degree_histogram(ugraph)\\nprint (\\\"directed graph =\\\"), nx.degree_histogram(graph)\\n\\n# degree rank plot\\n# only for undirected type\\n\\ndegree_sequence=sorted(nx.degree(ugraph).values(),reverse=True) # degree sequence\\n#print \\\"Degree sequence\\\", degree_sequence\\ndmax=max(degree_sequence)\\n\\nplt.loglog(degree_sequence,'b-',marker='o')\\nplt.title(\\\"Degree rank plot\\\")\\nplt.ylabel(\\\"degree\\\")\\nplt.xlabel(\\\"rank\\\")\\n\\n# draw graph in inset\\nplt.axes([0.45,0.45,0.45,0.45])\\nGcc=sorted(nx.connected_component_subgraphs(ugraph), key = len, reverse=True)[0]\\npos=nx.spring_layout(Gcc)\\nplt.axis('off')\\nnx.draw_networkx_nodes(Gcc,pos,node_size=20)\\nnx.draw_networkx_edges(Gcc,pos,alpha=0.4)\\n\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"3.3. \u5bf9\u4e8e\u591a\u4e2a\u7279\u5f81\uff0c\u54ea\u4e00\u4e2a\u7279\u5f81\u7684\u6700\u4f73\u9608\u503c\u5bf9\u7eaf\u51c0\u5ea6\u63d0\u5347\u6700\u5927\uff1f\\n\u663e\u7136\uff0c\u6700\u66b4\u529b\u7684\u65b9\u6cd5\u662f\uff1a\u6bcf\u6b21\u5206\u5272\uff0c\u6211\u4eec\u7a77\u5c3d\u6240\u6709\u7279\u5f81\uff0c\u5373\u53ef\u627e\u5230\u5bf9\u6b64\u8282\u70b9\u6700\u4f73\u5206\u5272\u70b9\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndef find_best_split(node, measure):\\n if node[\\\"target\\\"].unique().shape[0] <= 1:\\n return None\\n \\n purity_gain = pd.Series([], name=\\\"feature\\\")\\n \\n for f in node.drop(\\\"target\\\", axis=1).columns:\\n purity_gain[f] = find_best_threshold(node, f, measure)\\n \\n if pd.isnull(purity_gain.max()):\\n return None\\n else:\\n best_split = {\\\"feature\\\": purity_gain.argmax(), \\\"threshold\\\": purity_gain.max()}\\n best_split[\\\"child\\\"] = splitter(node, **best_split)\\n\\n return best_split\\n\\nbest_split = find_best_split(samples, gini_index)\\n[best_split[x] for x in [\\\"feature\\\", \\\"threshold\\\"]]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Cloning the repo\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# Replace them with your personal credentials.\\nemail = \\\"srikarreddy38@gmail.com\\\"\\nusername = \\\"always-newbie161\\\"\\nrepo = \\\"LFS-data\\\" # remote repository.\\n\\n!rm -rf colab_utils.py # remove any old copies of this file\\n!wget https:\\/\\/raw.githubusercontent.com\\/probml\\/pyprobml\\/master\\/scripts\\/colab_utils.py\\nimport colab_utils as utils\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Problem 1 - Implement Min-Max scaling for grayscale image data\\ndef normalize_grayscale(image_data):\\n 
\\\"\\\"\\\"\\n Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]\\n :param image_data: The image data to be normalized\\n :return: Normalized image data\\n \\\"\\\"\\\"\\n # TODO: Implement Min-Max scaling for grayscale image data\\n a = 0.1\\n b = 0.9\\n grayscala_min = 0\\n grayscala_max = 255\\n return a + (image_data - grayscala_min) * (b - a) \\/ (grayscala_max - grayscala_min)\\n\\n### DON'T MODIFY ANYTHING BELOW ###\\n# Test Cases\\nnp.testing.assert_array_almost_equal(\\n normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),\\n [0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,\\n 0.125098039216, 0.128235294118, 0.13137254902, 0.9],\\n decimal=3)\\nnp.testing.assert_array_almost_equal(\\n normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),\\n [0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,\\n 0.896862745098, 0.9])\\n\\nif not is_features_normal:\\n train_features = normalize_grayscale(train_features)\\n test_features = normalize_grayscale(test_features)\\n is_features_normal = True\\n\\nprint('Tests Passed!')\\n\\nif not is_labels_encod:\\n # Turn labels into numbers and apply One-Hot Encoding\\n encoder = LabelBinarizer()\\n encoder.fit(train_labels)\\n train_labels = encoder.transform(train_labels)\\n test_labels = encoder.transform(test_labels)\\n\\n # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32\\n train_labels = train_labels.astype(np.float32)\\n test_labels = test_labels.astype(np.float32)\\n is_labels_encod = True\\n\\nprint('Labels One-Hot Encoded')\\n\\nassert is_features_normal, 'You skipped the step to normalize the features'\\nassert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'\\n\\n# Get randomized datasets for training and...\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\n\\nProblem 1\\nThe first problem involves normalizing the features for your training and test data.\\nImplement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. 
After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.\\nSince the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.\\nMin-Max Scaling:\\n$\\nX'=a+{\\\\frac {\\\\left(X-X_{\\\\min }\\\\right)\\\\left(b-a\\\\right)}{X_{\\\\max }-X_{\\\\min }}}\\n$\\nIf you're having trouble solving problem 1, you can view the solution here.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"212811\\nWhat are the earliest two films listed in the titles dataframe?\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ntitles.sort('year').head()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"%%writefile $bashFile\\n#!\\/bin\\/bash\\n\\n#-- R analysis --#\\nexport PATH={R_dir}:$PATH\\n \\n# plotting taxon abundances\\nOTU_taxonAbund.r \\\\\\n OTU_n2_abs{abs}.txt \\\\\\n -r {topTaxaToPlot} \\\\\\n -o OTU_n2_abs{abs}\\n# plotting taxon abundances\\nOTU_taxonAbund.r \\\\\\n OTU_n2_abs{abs}_sub-norm.txt \\\\\\n -r {topTaxaToPlot} \\\\\\n -o OTU_n2_abs{abs}_subsub-norm\\n# running DeSeq2 and making confusion matrix on predicting incorporators\\n## making phyloseq object from OTU table\\nphyloseq_make.r \\\\\\n OTU_n2_abs{abs}_sub-norm_w.txt \\\\\\n -s OTU_n2_abs{abs}_sub-norm_meta.txt \\\\\\n > OTU_n2_abs{abs}_sub-norm.physeq\\n## filtering phyloseq object to just taxa\\/samples of interest\\nphyloseq_edit.r \\\\\\n OTU_n2_abs{abs}_sub-norm.physeq \\\\\\n --BD_min {BD_min} \\\\\\n --BD_max {BD_max} \\\\\\n > OTU_n2_abs{abs}_sub-norm_filt.physeq\\n## making ordination\\nphyloseq_ordination.r \\\\\\n OTU_n2_abs{abs}_sub-norm_filt.physeq \\\\\\n OTU_n2_abs{abs}_sub-norm_bray-NMDS.pdf\\n## DESeq2\\nphyloseq_DESeq2.r \\\\\\n OTU_n2_abs{abs}_sub-norm_filt.physeq \\\\\\n --log2 {log2} \\\\\\n --hypo greater \\\\\\n > OTU_n2_abs{abs}_sub-norm_DESeq2\\n## Confusion matrix\\nDESeq2_confuseMtx.r \\\\\\n {frag_file}_incorp_BD-shift.txt \\\\\\n OTU_n2_abs{abs}_sub-norm_DESeq2 \\\\\\n --padj {padj} \\n\\n!chmod 775 $bashFile\\n\\n!cd $workDir; \\\\\\n nestrun -j 30 --template-file $bashFile -d percIncorpUnifRep --log-file logR.txt\\n\\n# aggregating confusion matrix data\\n## table\\n!cd $workDir; \\\\\\n nestagg delim \\\\\\n -d percIncorpUnifRep \\\\\\n -k percIncorp,rep \\\\\\n -o .\\/percIncorpUnifRep\\/DESeq2-cMtx_table.csv \\\\\\n DESeq2-cMtx_table.csv\\n\\n## overall\\n!cd $workDir; \\\\\\n nestagg delim \\\\\\n -d percIncorpUnifRep\\\\\\n -k percIncorp,rep \\\\\\n -o .\\/percIncorpUnifRep\\/DESeq2-cMtx_overall.csv \\\\\\n DESeq2-cMtx_overall.csv\\n \\n## byClass\\n!cd $workDir; \\\\\\n nestagg delim \\\\\\n -d percIncorpUnifRep \\\\\\n -k percIncorp,rep \\\\\\n -o .\\/percIncorpUnifRep\\/DESeq2-cMtx_byClass.csv \\\\\\n DESeq2-cMtx_byClass.csv\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nR analysis\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"And double-check that we got the shapes the way we wanted. 
As a reminder, we're working with 375 cities and 14 days.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nprint('Mobility Matrix over time should have shape (375, 375, 14): {}'.format(\\n mobility_matrix_over_time.shape))\\nprint('Observed Infectious should have shape (375, 14): {}'.format(\\n observed_daily_infectious_count.shape))\\nprint('Initial population should have shape (375): {}'.format(\\n initial_population.shape))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Applied Machine Learning in Python\\/Week 1\\/Week1.ipynb\\\".\\nThe first task is:\\nHow sensitive is k-NN classification accuracy to the choice of the 'k' parameter?\\nCan you write Python code for it?\\n\",\"targets\":\"\\nk_range = range(1,20)\\nscores = []\\n\\nfor k in k_range:\\n knn = KNeighborsClassifier(n_neighbors = k)\\n knn.fit(X_train, y_train)\\n scores.append(knn.score(X_test, y_test))\\n\\nplt.figure()\\nplt.xlabel('k')\\nplt.ylabel('accuracy')\\nplt.scatter(k_range, scores)\\nplt.xticks([0,5,10,15,20]);\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"CountVectorizer converts a collection of text documents to a matrix of token counts (part of sklearn.feature_extraction.text).\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nveczr = CountVectorizer(tokenizer=tokenize)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"As the $u$, $v$, $w$ coordinates explicitly depend on $H$, we must evaluate them for each observational time step. We will use the equations defined in $\\\\S$ 4.2.2 ➞:\\n\\n$\\\\lambda u = X \\\\sin H + Y \\\\cos H$\\n$\\\\lambda v= -X \\\\sin \\\\delta \\\\cos H + Y \\\\sin\\\\delta\\\\sin H + Z \\\\cos\\\\delta$\\n$\\\\lambda w= X \\\\cos \\\\delta \\\\cos H -Y \\\\cos\\\\delta\\\\sin H + Z \\\\sin\\\\delta$\\n\",\"targets\":\"u = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)\\/1e3\\nv = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)\\/1e3\\nw = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)\\/1e3\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"(5 pts) Implement the power method for a given matrix $A$, an initial guess $x_0$ and a number of iterations num_iter. It should be organized as a function power_method(A, x0, num_iter) that outputs approximation to eigenvector $x$, eigenvalue $\\\\lambda$ and history of residuals ${\\\\|Ax_k - \\\\lambda_k x_k\\\\|_2}$. 
Make sure that the method conveges to the correct solution on a matrix $\\\\begin{bmatrix} 2 & -1 \\\\ -1 & 2 \\\\end{bmatrix}$ which is known to have the largest eigenvalue equal to $3$.\\n\",\"targets\":\"A = np.matrix([[2, -1], [-1, 2]])\\nx0 = np.matrix([0, 1])\\nx, l, res = power_method(A, x0, 10)\\nprint(x, l)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Test Adam implementation; you should see errors around 1e-7 or less\\nN, D = 4, 5\\ntest_adam = sequential(fc(N, D, name=\\\"adam_fc\\\"))\\n\\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\\nm = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\\nv = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)\\n\\ntest_adam.layers[0].params = {\\\"adam_fc_w\\\": w}\\ntest_adam.layers[0].grads = {\\\"adam_fc_w\\\": dw}\\n\\nopt_adam = Adam(test_adam, 1e-2, 0.9, 0.999, t=5)\\nopt_adam.mt = {\\\"adam_fc_w\\\": m}\\nopt_adam.vt = {\\\"adam_fc_w\\\": v}\\nopt_adam.step()\\n\\nupdated_w = test_adam.layers[0].params[\\\"adam_fc_w\\\"]\\nmt = opt_adam.mt[\\\"adam_fc_w\\\"]\\nvt = opt_adam.vt[\\\"adam_fc_w\\\"]\\n\\nexpected_updated_w = np.asarray([\\n [-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],\\n [-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],\\n [ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],\\n [ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])\\nexpected_v = np.asarray([\\n [ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],\\n [ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],\\n [ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],\\n [ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])\\nexpected_m = np.asarray([\\n [ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],\\n [ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],\\n [ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],\\n [ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])\\n\\nprint 'updated_w error: ', rel_error(expected_updated_w, updated_w)\\nprint 'mt error: ', rel_error(expected_m, mt)\\nprint 'vt error: ', rel_error(expected_v, vt)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nAdam\\nThe update rule of Adam is as shown below: \\n\\\\begin{equation}\\ng_t: gradients\\\\ at\\\\ update\\\\ step\\\\ t \\\\\\nm_t = \\\\beta_1m_{t-1} + (1-\\\\beta_1)g_t \\\\\\nv_t = \\\\beta_2v_{t-1} + (1-\\\\beta_1)g_t^2 \\\\\\n\\\\hat{m_t}: bias\\\\ corrected\\\\ m_t \\\\\\n\\\\hat{v_t}: bias\\\\ corrected\\\\ v_t \\\\\\n\\\\theta_{t+1} = \\\\theta_t - \\\\frac{\\\\eta}{\\\\sqrt{\\\\hat{v_t}}+\\\\epsilon}\\n\\\\end{equation}\\nComplete the Adam() function in optim.py\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\".ipynb_checkpoints\\/A New Treatment for Arthritis-checkpoint.ipynb\\\".\\nThe first task is:\\nNow we need a widget to use this...\\nCan you write Python code for it?\\n\",\"targets\":\"\\nfrom ipywidgets import interact\\n\\ninteract(update_plot_statistic, statistic=[\\\"Average\\\", \\\"Max\\\", \\\"Min\\\"])\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" 
"{\"inputs\":\"Creating the loop\\nWhile the full loop should also include a controller, we are currently only interested in showcasing the estimator. We therefore estimate the states for an arbitrary initial condition and some random control inputs (shown below).\\n\",\"targets\":\"x0 = np.pi*np.array([1, 1, -1.5, 1, -5, 5, 0, 0]).reshape(-1,1)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"1.4. Basic Approximations\\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\\nBasic approximations made in the ocean.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# PROPERTY ID - DO NOT EDIT ! \\nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \\n\\n# PROPERTY VALUE(S): \\n# Set as follows: DOC.set_value(\\\"value\\\") \\n# Valid Choices: \\n# \\\"Primitive equations\\\" \\n# \\\"Non-hydrostatic\\\" \\n# \\\"Boussinesq\\\" \\n# \\\"Other: [Please specify]\\\" \\nDOC.set_value(\\\"Boussinesq\\\") \\nDOC.set_value(\\\"Primitive equations\\\")\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Central $\\\\chi^2$ and Rayleigh Distribution\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# parameters of distribution \\nsigma2 = 1 \\n\\n\\n# continuous world and theoretical pdf\\ndelta_x = .001\\n\\nx = np.arange( 0, 10 * np.sqrt(sigma2) + delta_x, delta_x) \\nf_theo_chi2 = 1 \\/ sigma2 \\/ 2 * np.exp( - x \\/ 2 \\/ sigma2 ) \\nf_theo_Ray = x \\/ sigma2 * np.exp( - x**2 \\/ 2 \\/ sigma2 )\\n\\n\\n# sample gaussian\\nN_samples = int( 1e4 )\\nX = np.sqrt( sigma2 ) * np.random.randn( 2, N_samples )\\n\\nX2 = np.sum( X**2, axis = 0 )\\nX_R = np.sqrt( X2 )\\n\\n# plotting\\nplt.subplot(121)\\nplt.plot( x, f_theo_chi2, linewidth=2.0, label='Theo.')\\nplt.hist( X2, 50, density=1, label='Sim.', alpha=0.75) \\n\\nplt.xlabel('$x, n$')\\nplt.ylabel('$f(x), H_{{{}}}(n)$'.format( N_samples ) )\\n\\nplt.grid( True )\\nplt.legend( loc = 'upper right' )\\nplt.title('Central $\\\\chi^2$ Distribution')\\n\\nplt.subplot(122)\\nplt.plot( x, f_theo_Ray, linewidth=2.0, label='Theo.')\\nplt.hist( X_R, 50, density=1, label='Sim.', alpha=0.75) \\n\\nplt.xlabel('$x, n$')\\nplt.ylabel('$f(x), H_{{{}}}(n)$'.format( N_samples ) )\\n\\nplt.grid( True )\\nplt.legend( loc = 'upper right' )\\nplt.title('Rayleigh Distribution' )\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Gracias\\/ConductionLine.ipynb\\\".\\nThe first task is:\\nUsing a 10 inch or 12 inch pipe would potentially leave enough remaining available headloss to use for flow control. \\nFlow Distribution\\nNow the question is about flow distribution. The effect of shutting off one train potentially effects the flow rate of the other. Determining the extent of this effect is a flow distribution problem, much like those done throughout plant design. By studying the various flow paths, one can determine the effect of shutting off a flow path during the worst case scenario. There are several steps to designing the optimal system. 
First, the goal is to reduce the headloss in the shared line, because that headloss changes when one branch is turned off and the flow rate is halved. As the shared headloss reduces, the leftover headloss is taken up by the remaining line, increasing train flow. The steps to define the optimal pipe configuration are as follows:\\n\\nPipe Length Geometry: make a guess for the ideal pipe geometry, attempting to minimize shared headloss and maximize train branch headloss.\\nHeadloss Calculations: determine minor and major losses throughout the system.\\nSystem Error (Cross-Talk Effect): calculate the effect of cross-talk over a range of flow rates.\\n\\n1. Pipe Length Geometry\\nThe initial pipe design is based on limited knowledge of the site, and is supposed to convey a conservative guess for the condction and distribution line geometry. When a full topography of the site and the two upstream pressure breaks, a more precise design will be made and analyzed. The video below is a rendering of the preliminary design of the conduction and train-distribution system:\\n\\nIn summary, the proposed plan is to augment both lines running from the pressure break to 8\\\" lines. The two lines will immediately plumb into a main 12\\\" conduction line. The main line will run 455 m to the plant site, where it splits at a tee into two 4\\\" lines. The following calculations ensure the cross-talk between the two trains are minimized. \\n2. Headloss Calculations\\nThe headloss in the various components of...\\nCan you write Python code for it?\\n\",\"targets\":\"\\nid_12 = pipe.ID_SDR(12, sdr_pipe)\\n\\n# conservative minor loss coefficient in both lines pressure break to tee:\\nk_value_pressure_break_to_tee_6_inch = exp.K_MINOR_PIPE_ENTRANCE + \\\\\\n exp.K_MINOR_90 + k.k_value_expansion(id_6, id_12, flow_branch)\\nk_value_pressure_break_to_tee_4_inch = exp.K_MINOR_PIPE_ENTRANCE + \\\\\\n exp.K_MINOR_90 + exp.K_MINOR_EL45 + k.k_value_expansion(id_4, id_12, flow_branch)\\n\\nprint(\\\"k value in 6 inch line: \\\" + str(k_value_pressure_break_to_tee_6_inch))\\nprint('k value in 4 inch line: ' + str(k_value_pressure_break_to_tee_4_inch))\\n\\n# conservative pipe lengths from pressure break to tee:\\nl_pressure_break_to_tee_6_inch = 4 * u.m\\nl_pressure_break_to_tee_4_inch = 4 * u.m\\n\\n# determine headloss through both 4\\\" and 6\\\" pipes by defining headloss range:\\nheadloss_range_pressure_break_to_tee = np.linspace(0.1,10,100) * u.m\\n# calculate the added flow rates for all the headlosses in the range:\\nflow_range_pressure_break_to_tee = pc.flow_pipe(id_4, headloss_range_pressure_break_to_tee, \\n l_pressure_break_to_tee_4_inch, exp.NU_WATER, mat.PIPE_ROUGH_PVC,\\n k_value_pressure_break_to_tee_4_inch) + \\\\\\n pc.flow_pipe(id_6, headloss_range_pressure_break_to_tee, \\n l_pressure_break_to_tee_6_inch, exp.NU_WATER, mat.PIPE_ROUGH_PVC,\\n k_value_pressure_break_to_tee_6_inch)\\n\\n# graph of flow rates for various flow rates:\\ndf = pd.DataFrame(np.array(flow_range_pressure_break_to_tee.to(u.L\\/u.s)), \\n index=np.array(headloss_range_pressure_break_to_tee), \\n columns = ['4\\\" and 6\\\" lines'])\\ndf.index.name = 'Headloss (m)'\\ndf.columns.name = 'flow (L\\/s)'\\ndf.name = 'Headloss v. 
Flow rate for Pressure Break to Tee'\\ndf.plot().set(ylabel=df.columns.name, title=df.name)\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Others useful libraries\\nfrom time import time\\n\\n# Second, we write a function that implements the model, solves the LP, \\n# and returns the KW distance along with an optimal transport plan.\\ndef BipartiteDistanceW1_L2(Mu, Nu):\\n t0 = time()\\n # Main Pyomo model\\n model = ConcreteModel()\\n # Parameters\\n model.I = RangeSet(len(Mu.X))\\n model.J = RangeSet(len(Nu.X))\\n # Variables\\n model.PI = Var(model.I, model.J, within=NonNegativeReals) \\n # Objective Function\\n model.obj = Objective(\\n expr=sum(model.PI[i,j] * Cost(Mu.X[i-1], Nu.X[j-1]) for i,j in model.PI))\\n \\n # Constraints on the marginals\\n model.Mu = Constraint(model.I, \\n rule = lambda m, i: sum(m.PI[i,j] for j in m.J) == Mu.W[i-1])\\n model.Nu = Constraint(model.J, \\n rule = lambda m, j: sum(m.PI[i,j] for i in m.I) == Nu.W[j-1])\\n \\n # Solve the model\\n sol = SolverFactory('glpk').solve(model)\\n\\n # Get a JSON representation of the solution\\n sol_json = sol.json_repn()\\n # Check solution status\\n if sol_json['Solver'][0]['Status'] != 'ok':\\n return None\\n if sol_json['Solver'][0]['Termination condition'] != 'optimal':\\n return None\\n\\n return model.obj(), time()-t0\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nSolving the Bipartite model\\nGiven two discrete measures $\\\\mu$ and $\\\\nu$, we can use an implicit definition of the bipartite graph, by defining directly the following function, whichi implements and solves LP model described before.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"#total number of crimes per day in each month for each year\\n#i.e for each day, the average daily temperature was X and there were Y number of crimes.\\ncrime_weather = C_count\\ncrime_daily_C_Temperature = crime_weather.groupby(['Year','Month','Day'])['Primary Type'].count()\\ncrime_daily_C_Temperature = crime_daily_C_Temperature.reset_index()\\ncrime_daily_C_Temperature = crime_daily_C_Temperature.merge(temp,on=['Year','Month','Day'],how='inner')\\ncrime_daily_C_Temperature\\n\\nplt.figure(figsize=(15,15))\\nax = sns.lmplot(x='Mean_Temp',y='Primary Type',hue='Year',data=crime_daily_C_Temperature,scatter=False)\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nRate of crime per year as a function of temperature\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Check statistical significance of the difference in means in Figure SI-1(a)\\nIn the cell below, we find that the means of $F(1000)$ are statistically significantly different between the two models for $F(0) = 0.155, 0.16, 0.165, ..., 0.2$ according to the two-sided Mann-Whitney $U$ test ($p$-value $< 10^{-5}$):\\n\",\"targets\":\"for init_F, df in long_run_results.groupby('init_F'):\\n df_grouped_by_r_xi = df.groupby(['r', 'xi'])\\n print('F(0) = {:>5}'.format(init_F), end='\\\\n\\\\t')\\n original_final_F = df_grouped_by_r_xi.get_group((1, 0))['final_F']\\n sticky_PA_final_F = df_grouped_by_r_xi.get_group((2000, 1))['final_F']\\n 
print('mean F(1000) for original model: {:>5.3f}'.format(original_final_F.mean()), end='\\\\n\\\\t')\\n print('mean F(1000) for sticky\\/PA model: {:>5.3f}'.format(sticky_PA_final_F.mean()), end='\\\\n\\\\t')\\n \\n mann_whitney_test = stats.mannwhitneyu(sticky_PA_final_F, original_final_F, alternative='two-sided')\\n print('Mann-Whitney U test:')\\n print('\\\\t\\\\tp-value: ', mann_whitney_test.pvalue, end=' ') \\n if mann_whitney_test.pvalue < 10**(-3):\\n print('*' * 3)\\n else:\\n print('')\\n print('\\\\t\\\\tU = ', mann_whitney_test.statistic, end=' ') \\n print('\\\\n')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Data-Analytics\\/Adult census data.ipynb\\\".\\nThe first task is:\\nWe see that around a number of 9 trees the scores start to saturate, so we can choose this value as a sweet spot. Let's see how our learning curve looks at this point in hyper-parameter space\\nCan you write Python code for it?\\n\",\"targets\":\"\\ntrain_sizes=np.linspace(.1, 1.0, 15)\\nrf_clf.max_depth = 14\\nrf_clf.n_estimators = 9\\nplotLearningCurve(rf_clf, train_feature_df, train_target_df, train_sizes)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"14\\/14 - TF-IDF Homework.ipynb\\\".\\nThe first task is:\\nEveryone seems to start their speeches with \\\"mr chairman\\\" - how many speeches are there total, and many don't mention \\\"chairman\\\" and how many mention neither \\\"mr\\\" nor \\\"chairman\\\"?\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndf['chairman'].value_counts().head(5) #Chairman is NOT mentioned in 250 speeches.\\n\\nlen(df[df['chairman']==0])\\n\\nlen(df[(df['chairman']==0) & (df['mr']==0)])\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Data_types.ipynb\\\".\\nThe first task is:\\nScalaires\\n\\nLes types num\u00e9riques int et float\\nCan you write Python code for it?\\n\",\"targets\":\"\\na = 1 # int\\nb = 1. # float\\nprint a\\nprint b \\n\\n# Les variables sont sensibles \u00e0 la casse\\nprint A \\n\\nprint(1+1.)\\nprint 1+1\\nprint 1\\/2\\nprint 1.\\/2\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"03_Medidas_de_Correlacion.ipynb\\\".\\nThe first task is:\\nHistogramas conjuntos\\nA diferencia del histograma independiente, en este se presenta la probabilidad de ocurrencia de un suceso s\u00f3lo en el caso en que el otro suceso ocurre de manera simult\u00e1nea. 
Cuando este histograma es igual al histograma independiente se dice que las variables son independientes\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndef Hist2D(x,y,bins = 12):\\n RangoX = np.linspace(x.min(),x.max(),bins)\\n RangoY = np.linspace(y.min(),y.max(),bins)\\n h2d = np.zeros(12,12)\\n for ic,i in enumerate(x):\\n for jc,j in enumerate(y):\\n \\n \\n\\nx = np.random.normal(0,1,1000)\\ny = np.random.uniform(0,1,1000)\\nfig = pl.figure()\\na = pl.hist2d(x,y,bins=12)\\nfig = pl.figure()\\na = pl.scatter(x,y)\\n\\ncont = 0\\nfor i in [1,2,4,12,2]:\\n print i,cont\\n cont-=10\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Carga y prepara el conjunto de datos MNIST\\n\",\"targets\":\"mnist = tf.keras.datasets.mnist\\n\\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\\nx_train, x_test = x_train \\/ 255.0, x_test \\/ 255.0\\n\\n# Agrega una dimension de canales\\nx_train = x_train[..., tf.newaxis]\\nx_test = x_test[..., tf.newaxis]\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"#read csv\\nanscombe_iii = pd.read_csv('..\\/datasets\\/anscombe_iii.csv')\\n\\nplt.scatter(anscombe_iii.x, anscombe_iii.y, color='black')\\nplt.ylabel(\\\"Y\\\")\\nplt.xlabel(\\\"X\\\")\\n\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nIt is intuitive to see that we are overfitting since the high degree polynomial hits every single point (causing our mean squared error (MSE) to be zero), but it would generalize well. For example, if x=5, it would estimate y to be -45 when you would expect it to be above 0.\\nwhen you are dealing with more than one variable, it becomes increasingly difficult to prevent overfitting, since you can not plots past four-five dimensions (x axis,y axis,z axis, color and size). For this reason we should always use cross validation to reduce our variance error (due to overfitting) while we are deducing bias (due to underfitting). Throughout the course we will learn more on what this means, and learn practical tips.\\nThe key takeaway here is more complex models are not always better. Use visualizations and cross validation to prevent overfitting! 
(We will learn more about this soon!)\\nNow, let us work on the third set of data from quartet\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Analysis\\/Calculations\\/.ipynb_checkpoints\\/Shrink Fit and Liner as Gasket Analysis-checkpoint.ipynb\\\".\\nThe first task is:\\nChange in Liner Thickness Necessary to Achieve Seating Stress\\nThe radial stress due to the compression of the liner follows Hooke's Law:\\n$$\\\\sigma_{PTFE, amb} = \\\\frac{\\\\delta_{Liner, amb}}{t_{amb}}E_{PTFE, amb}$$\\nWhere $t_{amb}$ is the liner thickness at ambient temperature before compression.\\nsolving this equation for the change in liner thickness yields:\\n$$\\\\delta_{Liner, amb} = \\\\frac{\\\\sigma_{PTFE, amb}}{E_{PTFE, amb}}t_{amb}$$\\nCan you write Python code for it?\\n\",\"targets\":\"\\ndeltaLinerAmbient = (sigmaPTFEAmbient\\/EPTFE)*t\\nprint('The change in liner thickness due to compression must be', \\\"%.4f\\\" % deltaLinerAmbient, 'in, in order to achieve a proper seal.')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"#@test {\\\"skip\\\": true}\\nif files is not None:\\n files.download(checkpoint_zip_filename) # try again if this fails: https:\\/\\/github.com\\/googlecolab\\/colabtools\\/issues\\/469\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nzip\u30d5\u30a1\u30a4\u30eb\u3092\u30c0\u30a6\u30f3\u30ed\u30fc\u30c9\u3057\u307e\u3059\u3002\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Create your dictionary that maps vocab words to integers here\\nfrom collections import Counter\\nword_counter = Counter(words)\\nword_sorted_set = sorted(word_counter, key=word_counter.get, reverse=True)\\nvocab_to_int = {word:idx for idx,word in enumerate(word_sorted_set)}\\n\\n# Convert the reviews to integers, same shape as reviews list, but with integers\\nreviews_ints = []\\nfor review in reviews:\\n reviews_ints.append([vocab_to_int[word] for word in review.split()])\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nEncoding the words\\nThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.\\n\\nExercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.\\nAlso, convert the reviews to integers and store the reviews in a new list called reviews_ints.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Classify\\nWe have obtained our best classifiers above, so we classify the data with each. 
We will then compare how these classifiers worked with a ROC.\\nFirst we defined our classify and compute_roc functions and then use them below.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\ndef classify(clf=None, save_output=False):\\n\\n # Use the best estimator from above and fit with our trained data and classes\\n clf.fit (labeled_svm, classes)\\n\\n # Attempt a prediction\\n predicted_classes = clf.predict(unlabeled_svm)\\n\\n if save_output:\\n # Creating an array to hold our new predicted values\\n out = np.zeros ((len(pred_class), unlabeled_svm.shape[1] + 1))\\n out[:,:unlabeled_svm.shape[1]] = unlabeled_svm[:][:]\\n out[:, -1] = predicted_classes [:]\\n\\n classifications_filename = \\\"classifications_predicted_{}.csv\\\".format(clf.kernel)\\n \\n # Save our new classifications out to a file\\n np.savetxt(classifications_filename, out, delimiter=\\\",\\\")\\n \\n return predicted_classes\\n\\ndef compute_roc(predicted_classes=None):\\n # Get our mask array of true values\\n predicted_correct_mask = np.array(unlabeled_classes == predicted_classes)\\n\\n # Get the scores from the classifier\\n y_score = clf.decision_function(unlabeled_svm)\\n\\n # Perform the ROC to get false-positives and true-positives\\n fpr, tpr, thresh = metrics.roc_curve(predicted_correct_mask,y_score[:,0])\\n roc_auc = metrics.auc(fpr,tpr)\\n \\n return fpr, tpr, roc_auc\\n\\n# Loop over each SVC type, classify, then compute the ROC\\nfor kernel in best_svcs:\\n clf = kernel.get('clf')\\n\\n print \\\"{} \\\".format(clf.kernel)\\n\\n # Do the actual classifying\\n print \\\"\\\\t Classifying...\\\"\\n pred_class = classify(clf)\\n \\n # Compute the ROC\\n print \\\"\\\\t Computer ROC...\\\"\\n fpr, tpr, auc = compute_roc(pred_class)\\n \\n # Store the predicted classes and the ROC params\\n kernel['predicted_classes'] = pred_class\\n kernel['fpr'] = fpr\\n kernel['tpr'] = tpr\\n kernel['auc'] = auc\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# while True:\\n# print('infinity')\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nThis type of loop is inherently more dangerous than a 'for' loop, because it can easily create a situation where the loop can never exit. In theory, such a loop will run indefinitely, although in practice it will most certainly cause Python to crash. 
The most dangerous kind of loop is also the simplest:\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Combine model and experiments to produce:\\n - observations dataframe\\n - model function to run experiments and return traces\\n - summary statistics function to accept traces\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nobservations, model, summary_statistics = setup(modelfile,\\n wang_act,\\n wang_inact,\\n courtemanche_kin,\\n courtemanche_deact,\\n courtemanche_rec)\\n\\nassert len(observations)==len(summary_statistics(model({})))\\n\\ng = plot_sim_results(modelfile,\\n wang_act,\\n wang_inact,\\n courtemanche_kin,\\n courtemanche_deact,\\n courtemanche_rec)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"14.3. Alkalinity\\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\\nHow is alkalinity modelled ?\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\n# PROPERTY ID - DO NOT EDIT ! \\nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \\n\\n# PROPERTY VALUE: \\n# Set as follows: DOC.set_value(\\\"value\\\") \\n# Valid Choices: \\n# \\\"Prognostic\\\" \\n# \\\"Diagnostic)\\\" \\n# TODO - please enter value(s)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Fall2017\\/Module1_gpu.ipynb\\\".\\nThe first task is:\\nPretty sweet! our neural network seems to have learnt something. Let's see how it does on the overall dataset:\\nCan you write Python code for it?\\n\",\"targets\":\"\\ncorrect = 0.0\\ntotal = 0.0\\nfor cache in cifar_test_loader:\\n img, label = cache \\n op = net(Variable(img))\\n _, pred = torch.max(op.data, 1)\\n total += label.size(0)\\n correct += (pred==label).sum()\\n\\n\\nprint(\\\"accuracy: %f\\\"%(100*(correct\\/total)))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Create a MultiHeadAttention layer to try out. At each location in the sequence, y, the MultiHeadAttention runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location.\\n\",\"targets\":\"temp_mha = MultiHeadAttention(d_model=512, num_heads=8)\\ny = tf.random.uniform((1, 60, 512)) # (batch_size, encoder_sequence, d_model)\\nout, attn = temp_mha(y, k=y, q=y, mask=None)\\nout.shape, attn.shape\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"

    \\n\\nSources\\nBack to TOC\\n\\nSee Textbook: Numerical Analysis, Timothy Sauer, 2nd Edition, page 305.\\nhttps:\\/\\/en.wikipedia.org\\/wiki\\/Pendulum_(mathematics)\\nhttps:\\/\\/en.wikipedia.org\\/wiki\\/Double_pendulum\\nhttps:\\/\\/scienceworld.wolfram.com\\/physics\\/DoublePendulum.html\\nhttps:\\/\\/demonstrations.wolfram.com\\/DoublePendulum\\/\\nWorld Pendulum Alliance: http:\\/\\/wpa.tecnico.ulisboa.pt\\/~wpa.daemon\\/\\nWorld Pendulum Alliance at USM: http:\\/\\/wpa.tecnico.ulisboa.pt\\/~wpa.daemon\\/hei-partners\\/p10-universidad-tecnica-federico-santa-maria-utfsm\\/\\n\\n
    \\n\\nThe Pendulum\\nBack to TOC\\n\",\"targets\":\"plt.figure(figsize=(8,8))\\nax=plt.gca()\\ntheta0=np.pi\\/4\\nx=np.sin(theta0)\\ny=-np.cos(theta0)\\nplt.plot([0, x],[0, y],'-k')\\nplt.scatter(x, y, s=200, marker='o', c='b')\\nplt.scatter(0, 0, s=200, marker='o', c='k')\\nplt.xlim([-1.5,1.5])\\nplt.ylim([-1.5,0.5])\\nplt.grid(True)\\npatches=[]\\nwedge = mpatches.Wedge((0, 0), 0.7, 270, 270+45, ec=\\\"none\\\")\\npatches.append(wedge)\\ncollection = PatchCollection(patches, cmap=plt.cm.hsv, alpha=0.3)\\nax.add_collection(collection)\\nplt.text(0.1, -0.4, r'$\\\\theta$', fontsize=20)\\nplt.text(0.8, -0.7, r'$m$', fontsize=20)\\nplt.text(0.35, -0.25, r'$l$', fontsize=20)\\nplt.show()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\n
    \\n \\\"View View source code<\\/a>\\n <\\/td>\\n<\\/table>\\n\\n\\nExample 3: FlatMap with a lambda function\\nFor this example, we want to flatten a PCollection of lists of strs into a PCollection of strs.\\nEach input element is already an iterable, where each element is what we want in the resulting PCollection.\\nWe use a lambda function that returns the same input element it received.\\n\",\"targets\":\"import apache_beam as beam\\n\\nwith beam.Pipeline() as pipeline:\\n plants = (\\n pipeline\\n | 'Gardening plants' >> beam.Create([\\n ['\ud83c\udf53Strawberry', '\ud83e\udd55Carrot', '\ud83c\udf46Eggplant'],\\n ['\ud83c\udf45Tomato', '\ud83e\udd54Potato'],\\n ])\\n | 'Flatten lists' >> beam.FlatMap(lambda elements: elements)\\n | beam.Map(print)\\n )\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"1.5 get_state\\nThe get_state method extracts features from within the environments and provides then as inputs to the policy provided by the RL agent. Several helper methods exist within flow to help facilitate this process. Some useful helper method can be accessed from the following objects:\\n* self.k.vehicle: provides current state information for all vehicles within the network\\n* self.k.traffic_light: provides state information on the traffic lights\\n* self.k.scenario: information on the scenario, which unlike the vehicles and traffic lights is static\\n* More accessor objects and methods can be found within the Flow documentation at: http:\\/\\/berkeleyflow.readthedocs.io\\/en\\/latest\\/\\nIn order to model global observability within the network, our state space consists of the speeds and positions of all vehicles (as mentioned in section 1.3). 
This is implemented as follows:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nimport numpy as np\\n\\nclass myEnv(myEnv): # update my environment class\\n\\n def get_state(self, **kwargs):\\n # the get_ids() method is used to get the names of all vehicles in the network\\n ids = self.k.vehicle.get_ids()\\n\\n # we use the get_absolute_position method to get the positions of all vehicles\\n pos = [self.k.vehicle.get_x_by_id(veh_id) for veh_id in ids]\\n\\n # we use the get_speed method to get the velocities of all vehicles\\n vel = [self.k.vehicle.get_speed(veh_id) for veh_id in ids]\\n\\n # the speeds and positions are concatenated to produce the state\\n return np.concatenate((pos, vel))\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Applying to Test Dataset\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nblind_data = pd.read_csv('..\\/nofacies_data.csv')\\n\\nX_blind = blind_data.drop(['Formation', 'Well Name', 'Depth'], axis=1).values\\nwell_blind = blind_data['Well Name'].values\\ndepth_blind = blind_data['Depth'].values\\n\\nX_blind, padded_rows = augment_features(X_blind, well_blind, depth_blind, N_neig=1)\\n\\n# Scaling\\nscl = preprocessing.RobustScaler().fit(X)\\nX_train = scl.transform(X)\\nX_blind = scl.transform(X_blind)\\n\\nY_train = np_utils.to_categorical(y, nb_classes)\\n\\nin_dim = len(X_train[0])\\n\\n# Method initialization\\nmodel = fDNN(in_dim, nb_classes)\\n\\n# Training\\nmodel.fit(X_train, Y_train, nb_epoch=epoch, batch_size=bats, verbose=0) \\n\\n# Predict\\ny_blind = model.predict_classes(X_blind, verbose=0)\\ny_blind = medfilt(y_blind, kernel_size=5)\\n\\nblind_data[\\\"Facies\\\"] = y_blind + 1 # return the original value (1-9)\\n\\nblind_data.to_csv(\\\"PA_Team_Submission_3_without_outlier_data.csv\\\")\\n\\nmake_facies_log_plot(\\n blind_data[blind_data['Well Name'] == 'STUART'],\\n facies_colors)\\n\\nmake_facies_log_plot(\\n blind_data[blind_data['Well Name'] == 'CRAWFORD'],\\n facies_colors)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Import everything needed to edit\\/save\\/watch video clips\\nfrom moviepy.editor import VideoFileClip\\nfrom IPython.display import HTML\\n\\nimport matplotlib.pyplot as plt\\nimport matplotlib.image as mpimg\\nimport numpy as np\\nimport cv2\\n\\ndef process_image(image):\\n gray = grayscale(image)\\n\\n # Define a kernel size and apply Gaussian smoothing\\n kernel_size = 5\\n blur_gray = gaussian_blur( gray, kernel_size )\\n\\n # Define our parameters for Canny and apply\\n low_threshold = 50\\n high_threshold = 150\\n edges = canny(blur_gray, low_threshold, high_threshold)\\n\\n # Next we'll create a masked edges image using cv2.fillPoly()\\n mask = np.zeros_like(edges) \\n ignore_mask_color = 255 \\n\\n # This time we are defining a four sided polygon to mask\\n imshape = image.shape\\n# Regular vid ROIs\\n vertices = np.array([[( 125, imshape[0] ), ( 400, 340 ), ( 520, 340 ), ( 900, imshape[0])]], dtype=np.int32)\\n# Challenge ROI\\n# vertices = np.array([[( 155 + 80, imshape[0] - 60 ), ( 590, 450 ), ( 750, 450 ), ( 1200 - 60, imshape[0] - 60)]], dtype=np.int32)\\n masked_edges = region_of_interest( edges, vertices )\\n \\n # Define the Hough transform 
parameters\\n # Make a blank the same size as our image to draw on\\n rho = 1 # distance resolution in pixels of the Hough grid\\n theta = np.pi\\/180 # angular resolution in radians of the Hough grid\\n threshold = 10 # minimum number of votes (intersections in Hough grid cell)\\n min_line_length = 15 #minimum number of pixels making up a line\\n max_line_gap = 1 # maximum gap in pixels between connectable line segments\\n line_image = np.copy(image)*0 # creating a blank to draw lines on\\n\\n # Run Hough on edge detected image\\n line_image = hough_lines( masked_edges, rho, theta, threshold, min_line_length, max_line_gap, vertices, False )\\n \\n # Create a \\\"color\\\" binary image to combine with line image\\n color_edges = np.dstack((edges, edges, edges)) \\n lines_edges = weighted_img( image, line_image)\\n\\n return lines_edges\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nTest on Videos\\nYou know what's cooler than drawing lanes over images? Drawing lanes over video!\\nWe can test our solution on two provided videos:\\nsolidWhiteRight.mp4\\nsolidYellowLeft.mp4\\nNote: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.\\nIf you get an error that looks like this:\\nNeedDownloadError: Need ffmpeg exe. \\nYou can download it by calling: \\nimageio.plugins.ffmpeg.download()\\nFollow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Applying DICS spatial filters separately to the CSD computed using the\\nbaseline and the CSD computed during the ERS activity.\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nbaseline_source_power, freqs = apply_dics_csd(csd_baseline, filters)\\nbeta_source_power, freqs = apply_dics_csd(csd_ers, filters)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Canny Edge Detection\\n\\nNoise Reduction\\nFinding Intensity Gradient of the Image\\nNon-maximum Suppression\\nHysteresis Thresholding\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nedges = cv2.Canny(bgr,100,200)\\n\\nplt.imshow(edges, cmap='gray')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"After that, let's make an operation over these variables. 
The function tf.add() adds two elements (you could also use c = a + b).\\n\",\"targets\":\"c = tf.add(a,b)\\n#c = a + b is also a way to define the sum of the terms\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# Task 3\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nTask 3: Is Steve Safe?\\nFor this task we will write a function named isSafe that will take a parameter position and return a value False if the input parameter position is above air or water and will return the value True otherwise. We will use the statements from Task 1 to write this function. Use return to return a value from this function. The outline of the function you need to write is below. Try to complete the function in the code block below.\\n```python\\ndef isSafe(position):\\n b = mc.getBlock(position.x,position.y-1,position.z)\\n if b == ?:\\n return False\\n if b == ?:\\n return False\\n return True\\n```\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"Constrained loading\\nFor large datasets, it is possible to constrain the load to match specific metadata or to load only a subset of the data. It is recommended as the loading is faster with given constraints.\\nThe following example loads only the \\\"IJ_AVG_S__O3\\\" variable:\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nfilename = 'netcdf\\/v10-01c-geosfp-Run1.20120801.nc'\\nclb = datasets.load_callbacks['gamap_bpch2coards']\\ndataset = datasets.load(filename, \\\"IJ_AVG_S__O3\\\",\\n callback=clb)\\n\\nprint dataset\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"# add squared distance from origin as third feature\\nX_extra = np.hstack((X, X[:, [0]]**2 + X[:, [1]]**2))\\n\\nplot_learning_curve(LinearSVC(C=0.25), \\\"LinearSVC(C=0.25) + distance feature\\\", \\n X_extra, y, ylim=(0.5, 1.0),\\n train_sizes=np.linspace(.1, 1.0, 5))\\nGiven the above code, generate some markdown instructions for it.\\n\",\"targets\":\"\\nThis data is clearly not linearly separable; more data or less features cannot help. Our model is wrong; thus the underfitting.\\nAdressing underfitting\\nWays to decrease underfitting:\\n * use more or better features (the distance from the origin should help!)\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"genmarkdown\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"Import dependencies. 
<\\/b>\\n\",\"targets\":\"# from apache_beam.options.pipeline_options import PipelineOptions\\n# from apache_beam.options.pipeline_options import GoogleCloudOptions\\n# from apache_beam.options.pipeline_options import StandardOptions\\n# import apache_beam as beam\\nfrom tensorflow.core.example import example_pb2\\nimport tensorflow as tf\\nimport time\\n\\nfrom proto import version_config_pb2\\nfrom proto.stu3 import fhirproto_extensions_pb2\\nfrom proto.stu3 import resources_pb2\\n\\nfrom google.protobuf import text_format\\nfrom py.google.fhir.labels import label\\nfrom py.google.fhir.labels import bundle_to_label\\nfrom py.google.fhir.seqex import bundle_to_seqex\\nfrom py.google.fhir.models import model\\nfrom py.google.fhir.models.model import make_estimator\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"markdowncode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"02_data_representation\\/weather_search\\/wx_embeddings.ipynb\\\".\\nThe first task is:\\nClustering the embeddings\\nIf the differences between images are meaningful, then it makes sense that we could cluster the images using just the embeddings.\\nLet's do K-Means clustering into 5 categories and visualize the five centroids.\\nCan you write Python code for it?\\n\",\"targets\":\"\\n# Unfortunately, BigQueryML does not accept arrays as input\\n# so we convert it into a struct. Generate the boilerplate code ...\\ndef create_array_to_struct(N):\\n sql = (\\\"\\\"\\\"\\nCREATE TEMPORARY FUNCTION arr_to_input(arr ARRAY)\\nRETURNS \\nSTRUCT<\\\"\\\"\\\" \\n + ', '.join([\\\"u{} FLOAT64\\\".format(idx+1) for idx in range(N)]) + \\\">\\\\n\\\"\\n + \\\"AS (STRUCT(\\\\n\\\"\\n + ', '.join([\\\"arr[OFFSET({})]\\\".format(idx) for idx in range(N)])\\n + \\\"\\\\n));\\\"\\n )\\n return sql\\n \\nprint(create_array_to_struct(50))\\n\\n%%bigquery\\n\\n-- Unfortunately, BigQueryML does not accept arrays as input, so we convert it into a struct\\n\\nCREATE TEMPORARY FUNCTION arr_to_input(arr ARRAY)\\nRETURNS \\nSTRUCT\\n\\nAS (\\nSTRUCT(\\n arr[OFFSET(0)], arr[OFFSET(1)], arr[OFFSET(2)], arr[OFFSET(3)], arr[OFFSET(4)]\\n , arr[OFFSET(5)], arr[OFFSET(6)], arr[OFFSET(7)], arr[OFFSET(8)], arr[OFFSET(9)]\\n , arr[OFFSET(10)], arr[OFFSET(11)], arr[OFFSET(12)], arr[OFFSET(13)], arr[OFFSET(14)]\\n , arr[OFFSET(15)], arr[OFFSET(16)], arr[OFFSET(17)], arr[OFFSET(18)], arr[OFFSET(19)]\\n , arr[OFFSET(20)], arr[OFFSET(21)], arr[OFFSET(22)], arr[OFFSET(23)], arr[OFFSET(24)]\\n , arr[OFFSET(25)], arr[OFFSET(26)], arr[OFFSET(27)], arr[OFFSET(28)], arr[OFFSET(29)]\\n , arr[OFFSET(30)], arr[OFFSET(31)], arr[OFFSET(32)], arr[OFFSET(33)], arr[OFFSET(34)]\\n , arr[OFFSET(35)], arr[OFFSET(36)], arr[OFFSET(37)], arr[OFFSET(38)], arr[OFFSET(39)]\\n ...\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"\\\"$${\\\\frac{\\\\mathrm{d[A]}}{\\\\mathrm{d}t}=1.0-1.0\\\\mathrm{[A]}}$$\\\"\\nPlease write code following the instructions in jupyter notebook style.\\n\",\"targets\":\"\\nm = get_model()\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"code\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"I am working on the file \\\"Clase 05 - Manejo de strings y Archivos de texto .ipynb\\\".\\nThe first task is:\\nY tambi\u00e9n podemos separar y combinar strings:\\nCan you 
write Python code for it?\\n\",\"targets\":\"\\nprint \\\"Hola mundo\\\".split()\\nprint \\\"Hola mundo\\\".split('o')\\nprint \\\"Hola mundo\\\".split('mu')\\nprint ''.join(['Hola', 'mundo'])\\nprint ' '.join(['Hola', 'mundo'])\\nvar = '#separador#'.join(['Hola', 'mundo'])\\nprint var\\n\\npadron, nombre, nota = '12321,nom bekr,4'.split(',')\",\"language\":\"jupyter-notebook\",\"split\":\"train\",\"template\":\"taskcode\",\"dataset\":\"codeparrot\\/github-jupyter-text-code-pairs\",\"config\":null}\n" "{\"inputs\":\"
    \\n\\nEXERCISE<\\/b>:\\n\\n